AdoptOS

Assistance with Open Source adoption

ETL

Introducing Talend API Services: Providing Best in Class Purpose-Built Applications

Talend - Thu, 11/15/2018 - 12:50

Have you heard? Talend Fall ’18 is here and continues Talend’s plan to meet the challenges today’s data professionals face in organizing, processing, and sharing data at scale. Earlier, Jean-Michel Franco wrote about the Data Catalog portion of this exciting Fall 2018 launch. In this blog I’d like to focus on our new API features.

For many organizations, APIs are no longer just technological creations of engineers to connect components of a distributed system. Today, APIs are directly driving business revenues, enabling innovation, and are the source of connectivity with partners.

With our Fall ‘18 launch, Talend Cloud will include a new API delivery platform, Talend Cloud API Services. Our delivery platform provides best-in-class, purpose-built applications for API-first creation, testing, and documentation. Essentially, this platform enables organizations to be more agile in their API development. It provides productivity gains over hand coding through an easy-to-use graphical design environment that supports both technical and less-technical personnel.

Additional enhancements to existing tools for API implementation and operation ensure organizations have a comprehensive approach to building user-friendly data APIs. And finally, Talend Cloud’s full support for open standards such as OAS, Swagger, and RAML makes the Talend API delivery platform complementary to existing third-party API gateways and catalogs, allowing for easy integration with your existing gateway or catalog.

Talend Cloud API Designer

With Fall ’18, Talend Cloud provides a new purpose-built application for designing API contracts visually instead of having to go the traditional route of hand coding. Developers can start from scratch or import an existing OAS / RAML definition. The interface allows developers to define API data types, resources, operations, parameters, responses, media types, and errors.

Once a developer is finished defining the contract, the API Designer generates the OAS / RAML definition for you! This definition can later be used as part of the service creation or imported into an existing API gateway or API catalog. I took a quick screenshot to show what the interface is going to look like.

Now I know how much everyone likes to write documentation (or maybe not). Thankfully with the API designer, the basic documentation is auto-generated for you. Users can then host it on Talend Cloud and easily share it with end consumers in a public or private mode. Talend Cloud API Designer also provides users with the ability to extend the generated documentation through an included rich text editor. Below is an example of the documentation generated by Talend Cloud API Designer.

Talend Cloud API Designer also provides automatic API mocking that can act as a live preview for end consumers, decoupling consumer application development from the backend services still being built. Mocked APIs can return data specified during the contract design or automatically generate data based on the defined data structure.

This mock is kept up to date throughout the development process and is enabled using the interface below. This will be a huge benefit for the consumers of my API: they won’t have to wait for me to finish building the back end before they start writing their applications. It’s pretty easy to turn on inside API Designer. A single click and users are off to the races.

Talend Cloud API Tester

Fall ’18 also includes a new application within Talend Cloud called Talend Cloud API Tester. Through this interface, QA / DevOps teams can easily call and inspect any HTTP-based API. It works with complex JSON or XML responses, enabling teams to validate the API’s behavior. Calls are stored, so I can easily look back through my history for what I’ve done before. An example of the interface is shown below.

My favorite feature of Talend Cloud API Tester is the ability to chain API calls together to create scenarios. These chained requests can use data returned from a previous call as parameters in the next call, enabling teams to create real-world examples of how the API will be used. Thankfully, this will keep my Notepad++ tabs down to a minimum. An example of this scenario design is provided below.
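
The scenario builder itself is graphical, but if it helps to see the moving parts, here is a rough plain-Java sketch of what a chained call amounts to. The endpoints and the crude JSON extraction are made up for illustration, and it uses Java 11’s built-in HttpClient rather than anything Talend-specific.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ChainedScenarioSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Step 1: fetch a customer (hypothetical endpoint)
        HttpResponse<String> first = client.send(
                HttpRequest.newBuilder(URI.create("https://api.example.com/customers?limit=1")).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // Crude extraction of the first "id" value from the JSON body;
        // a real scenario would use the Tester's expression syntax or a JSON library.
        String id = first.body().replaceAll("(?s).*\"id\"\\s*:\\s*\"?([^\",}]+).*", "$1");

        // Step 2: reuse the extracted id as a parameter of the next request
        HttpResponse<String> second = client.send(
                HttpRequest.newBuilder(URI.create("https://api.example.com/customers/" + id + "/orders")).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        System.out.println("Orders call for customer " + id + " returned status " + second.statusCode());
    }
}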

Throughout the testing process, I can define assertions to help validate API responses. Assertions can check a payload for completeness, verify how quickly a response came back, or confirm that a field has a specific value. Here’s an example of an assertion I made recently.
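
Under the hood, an assertion is simply a boolean check against the response. The hedged sketch below (plain Java with an assumed threshold and field name, not the Tester’s own assertion language) shows the three kinds of checks mentioned above:

import java.net.http.HttpResponse;
import java.time.Duration;

public class ResponseAssertionsSketch {

    /** Returns true only if the response passes the three example checks. */
    public static boolean passes(HttpResponse<String> response, Duration elapsed) {
        boolean statusOk   = response.statusCode() == 200;                       // the call succeeded
        boolean fastEnough = elapsed.toMillis() < 500;                           // assumed timeliness threshold
        boolean fieldOk    = response.body().contains("\"status\":\"ACTIVE\"");  // assumed field/value check
        return statusOk && fastEnough && fieldOk;
    }
}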

The last benefit I’d like to highlight is that test cases created using API Tester can be exported for use within a continuous integration / continuous delivery pipeline, ensuring that further updates to the API remain consistent.

Talend Studio for API implementation

We’ve made it simple to start working with the APIs built in API Designer. There’s a new metadata group called REST API definitions. A couple of clicks and I’ve downloaded the API definition and am ready to start building.

We can use this contract to bootstrap a service using the contract’s defined URIs, media types, parameters, etc. This approach expedites delivery of the backend service by reducing the complexity of defining the various functions.

There are also some updates to Talend Data Mapper: I can use the schema defined in the API definition as the return schema from TDM! This makes it a lot easier to convert data into the expected media type and structure.

Talend Cloud for API Operation     

Yep, Talend Cloud can now manage the services you’ve built in the Studio, just like we manage data integration jobs. If this is your first time hearing about Talend Cloud, it provides a managed environment that lets developers publish services developed in Talend Studio into the Talend Management Console’s artifact repository or a third-party repository, and manage the various environments the service needs to be deployed to as part of the QA / DevOps process. An example of this management can be seen in the snippet below.

As you can see, there is a mountain of functionality available in the new Talend Cloud API Services. If you’d like to see more, stay tuned: we have a series of videos and enablement material to get you all up to speed.

I look forward to hearing about the APIs you plan to build, and stay tuned to upcoming blogs from Talend if you’re looking for some inspiration. I’ll be following this blog up with a series of use cases we’ve seen and are hearing about!

The post Introducing Talend API Services: Providing Best in Class Purpose-Built Applications appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

SnapLogic Data Science brings self-service to machine learning

SnapLogic - Wed, 11/14/2018 - 07:30

Today, we announced the launch of SnapLogic Data Science, a visual self-service solution for the entire machine learning (ML) lifecycle. SnapLogic Data Science, together with SnapLogic’s award-winning integration platform, the Enterprise Integration Cloud, supports data sourcing, data preparation and feature engineering, and the training, validation, and deployment of machine learning models all in one platform.[...] Read the full article here.

The post SnapLogic Data Science brings self-service to machine learning appeared first on SnapLogic.

Categories: ETL

SnapLogic November 2018 Release: Revolutionize your business with intelligent integration

SnapLogic - Wed, 11/14/2018 - 07:29

We are thrilled to announce the general availability of the November 2018, 4.15 release of the SnapLogic integration platform. This release introduces several new solutions including SnapLogic Data Science, SnapLogic API Management, and SnapLogic for B2B Integration. It also includes core platform enhancements and new feature-rich Snaps. These powerful new capabilities will enable CIOs and[...] Read the full article here.

The post SnapLogic November 2018 Release: Revolutionize your business with intelligent integration appeared first on SnapLogic.

Categories: ETL

Simplifying Data Warehouse Optimization

Talend - Sat, 11/10/2018 - 13:04

When I hear the phrase “Data Warehouse Optimization”, shivers go down my spine. It sounds like such a complicated undertaking. After all, data warehouses are big, cumbersome, and complex systems that can store terabytes and even petabytes of data that people depend on to make important decisions on the way their business is run. The thought of any type of tinkering with such an integral part of a modern business would make even the most seasoned CIOs break out into cold sweats.

However, the value of optimizing a data warehouse isn’t often disputed.  Minimizing costs and increasing performance are mainstays on the “to-do” lists of all Chief Information Officers.  But that is just the tip of the proverbial iceberg.  Maximize availability.  Increase data quality.  Limit data anomalies.  Eliminate depreciating overhead.  These are the challenges that become increasingly more difficult to achieve when stuck with unadaptable technologies and confined by rigid hardware specifications.

The Data Warehouse of the Past

Let me put it into some perspective. Not long ago, many of today’s technologies (e.g. big data analytics, Spark engines for processing, and cloud computing and storage) didn’t exist, yet the reality of balancing the availability of quality data with the effort required to cleanse and load the latest information proved a constant challenge. Every month, IT was burdened with loading the latest data into the data warehouse for the business to analyze. However, the loading itself often took days to complete, and if the load failed, or worse, the data warehouse became corrupted, recovery efforts could take weeks. By the time last month’s errors were corrected, this month’s data needed to be loaded.

It was an endless cycle that produced little value.  Not only was the warehouse out-of-date with its information, but it was also tied up in data loading and data recovery processes, thus making it unavailable to the end user.  With the added challenges of today’s continuously increasing data volumes, a wide array of data sources and more demands from the business for real-time data in their analysis, the data warehouse needs to be a nimble and flexible repository of information, rather than a workhorse of processing power.

Today’s Data Warehouse Needs

In this day and age, CIOs can rest easy knowing that optimizing a data warehouse doesn’t have to be so daunting. With the availability of big data analytics, lightning-quick processing with Apache Spark, and the seemingly limitless and instantaneous scalability of the cloud, there are surely many approaches one can take to address the optimization conundrum. But I have found the most effective approach to simplifying data warehouse optimization (and providing the biggest return on investment) is to remove unnecessary processing (i.e. data processing, transformation, and cleansing) from the warehouse itself. By removing the inherent burden of ETL processes, the warehouse nearly instantaneously gains availability and performance. This is commonly referred to as “offloading ETL”.

This isn’t to say that the data doesn’t need to be processed, transformed, and cleansed. On the contrary, data quality is of utmost importance. But relying on the same system that serves up the data to also be responsible for processing and transforming it robs the warehouse of its sole purpose: providing accurate, reliable, and up-to-date analysis to end users in a timely fashion, with minimal downtime. By utilizing Spark and its in-memory processing architecture, you can shift the burden of ETL onto other in-house servers designed for such workloads. Or better yet, shift the processing to the cloud’s scalable infrastructure and not only optimize your data warehouse, but ultimately cut IT spend by eliminating the capital overhead of unnecessary hardware.
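
To make the idea concrete, here is a minimal Spark sketch of ETL offloading using the Java API. The file paths and column names are assumptions for illustration; in a Talend Big Data job the equivalent logic would typically be assembled from graphical Spark components rather than hand coded.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.upper;

public class OffloadEtlSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("etl-offload-sketch")
                .getOrCreate();

        // Read the raw extract (assumed CSV landing files) outside the warehouse
        Dataset<Row> raw = spark.read()
                .option("header", "true")
                .csv("s3a://landing-zone/customers/*.csv");    // hypothetical path

        // Cleanse and transform in Spark memory instead of inside the warehouse
        Dataset<Row> cleansed = raw
                .filter(col("customer_id").isNotNull())
                .withColumn("country", upper(col("country")))
                .dropDuplicates("customer_id");

        // Hand the warehouse only load-ready data (Parquet staging shown here)
        cleansed.write().mode("overwrite")
                .parquet("s3a://curated-zone/customers/");     // hypothetical path

        spark.stop();
    }
}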

Talend Big Data & Machine Learning Sandbox

In the new Talend Big Data and Machine Learning Sandbox, one example illustrates how effective ETL offloading can be. Utilizing Talend Big Data and Spark, IT can work with business analysts to perform pre-load analytics – analyzing the data in its raw form, before it is loaded into a warehouse – in a fraction of the time of standard ETL. Not only does this give business users insight into the quality of the data before it is loaded into the warehouse, it also gives IT a security checkpoint to prevent poor data from corrupting the warehouse and causing additional outages and challenges.
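
As a hedged sketch of what such a pre-load check boils down to (again the Spark Java API, with assumed paths, columns, and threshold), the raw extract is profiled and gated before anything touches the warehouse:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

public class PreLoadProfileSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("pre-load-profile").getOrCreate();

        Dataset<Row> raw = spark.read().option("header", "true")
                .csv("s3a://landing-zone/orders/*.csv");                           // hypothetical path

        long total      = raw.count();
        long missingIds = raw.filter(col("order_id").isNull()).count();            // assumed key column
        long badAmounts = raw.filter(col("amount").cast("double").isNull()).count(); // non-numeric amounts

        // A simple quality gate: refuse to promote the load if too many records are suspect
        double badRatio = total == 0 ? 1.0 : (double) (missingIds + badAmounts) / total;
        if (badRatio > 0.01) {
            throw new IllegalStateException("Pre-load check failed: suspect row ratio = " + badRatio);
        }
        spark.stop();
    }
}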

Optimizing a data warehouse can surely produce its fair share of challenges. But sometimes the best solution doesn’t have to be the most complicated. That is why Talend offers industry-leading data quality, native Spark connectivity, and subscription-based affordability, giving you a jump-start on your optimization strategy. Further, data integration tools need to be as nimble as the systems they are integrating. Leveraging Talend’s future-proof architecture means you will never be out of style with the latest technology trends, giving you peace of mind that today’s solutions won’t become tomorrow’s problems.

Download the Talend Big Data and Machine Learning Sandbox today and dive into our cookbook.

The post Simplifying Data Warehouse Optimization appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

4 Ways You Should be Using the Talend tMap Component

Talend - Fri, 11/09/2018 - 12:36

At Talend, one of my “Shadow IT” jobs is reporting on component usage. If you have ever used Talend Studio (either the open source Talend Open Studio or the commercial version), you most likely know the tMap component.

It is the most used component by a long shot. Why? Simply put, it’s because it is extremely versatile and useful.  When I was first asked to write this article I thought, “Why not, this will be easy.”  But, as I started actually going about picking out use cases I quickly realized that there are so many more than just 4 to choose from! I actually challenge everyone to respond and tell me below what you think the top features are of the tMap component. I would love to hear from you.

I will start out by listing the most obvious but most needed uses and then get into some more advanced ones. (I will try to sneak a couple in together so I can have more than 4; hopefully our editor doesn’t catch it.)

Editor’s note: I did catch it, but that’s just fine, Mark.

#1 Mapping, of course. 

The tMap’s most basic use is to map inputs to outputs. This can be as simple as mapping source fields to target fields of the data integration job. The input can also come from other components like aggregation, matching, or data quality components.

With the tMap you can also limit the fields mapped from left to right, basically filtering out unneeded columns. You can create new columns coming out of the tMap, for example adding sequence keys, or concatenating multiple input columns into a new column, such as combining address fields into a single mailing-address field. This leads into the next big use case I want to cover….

#2 Expression Builder. 

Within the tMap, on any column or variable, you can open the Expression Builder wizard, where you get access to hundreds of Talend functions. If you can’t find a Talend function that meets your needs, you can fall back on a native Java function (don’t fear, if you don’t know Java just Google it).

If you happen to know some Java, you can easily build custom Java routines which will then be available within the Expression Builder. The Expression Builder also allows you to do complex math on multiple fields if needed. You can extract parts of dates, do data conversions, even write case statements to build in “what if” logic. The same conditional statements can be used to determine whether a row should pass through the tMap at all, acting as a filter. As you can see, the tMap Expression Builder gives you great transformation powers.
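
As a quick illustration, a custom routine is essentially a Java class of public static methods saved under Routines in the Repository. The class, method, and column names below are my own example, not something that ships with Talend; once saved, you could call it from a tMap output expression such as MyStringUtil.toMailingAddress(row1.addr1, row1.city, row1.state, row1.zip) to build that single mailing-address field mentioned earlier.

package routines;

public class MyStringUtil {

    /**
     * Concatenates address parts into a single mailing-address field,
     * skipping null or empty parts. An example of the kind of helper
     * you might call from a tMap output expression.
     */
    public static String toMailingAddress(String addr, String city, String state, String zip) {
        StringBuilder sb = new StringBuilder();
        for (String part : new String[] { addr, city, state, zip }) {
            if (part != null && !part.trim().isEmpty()) {
                if (sb.length() > 0) {
                    sb.append(", ");
                }
                sb.append(part.trim());
            }
        }
        return sb.toString();
    }
}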

#3 Lookups. 

The tMap is where you can do what many data integration specialists refer to as “lookups” on data. For those unfamiliar, this is basically joining data from one source to another source. The tMap has a lot of lookup functionality, like inner and outer joins, rejecting rows when no join match is found, caching the lookup data, and much more. Lookups are critical to the data transformation process, as you often need to pull in reference data or get expanded views of records.

With Talend, the lookup source can be anything that can be sourced into a Talend job. This has almost endless possibilities. To illustrate, let’s quickly imagine a multi-cloud scenario. Let’s say you have customer data in AWS on S3 and some other critical data on Azure Blob storage. In a single Talend job using tMap and lookups, you can easily join the two sources together and write your data anywhere you need, like Google BigQuery just to be crazy!
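
If the lookup idea is new to you, here is a tiny plain-Java sketch of what an inner-join lookup with a reject flow boils down to. The data is hypothetical; in a real job the tMap join settings do this for you, at scale, against whatever sources you configure.

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LookupSketch {
    public static void main(String[] args) {
        // Lookup source: customer id -> segment (could come from S3, Blob storage, a database, ...)
        Map<String, String> segmentById = new HashMap<>();
        segmentById.put("C001", "Enterprise");
        segmentById.put("C002", "SMB");

        // Main flow: order rows carrying a customer id
        List<String[]> orders = Arrays.asList(
                new String[] { "O-1", "C001" },
                new String[] { "O-2", "C999" });   // no match, so an inner join rejects it

        for (String[] order : orders) {
            String segment = segmentById.get(order[1]);
            if (segment == null) {
                System.out.println("REJECT (no lookup match): " + order[0]);
            } else {
                System.out.println(order[0] + " enriched with segment " + segment);
            }
        }
    }
}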

#4 Route Multiple Outputs. 

The tMap can only have one input (not counting lookups), but you can have multiple outputs, each with any number of columns on its stream.

This becomes a fast and powerful way to route errors down a different flow or simply duplicate the data across different streams: one flow can go to an aggregation component, another output can go directly to your target outputs, and a third can carry a conditional statement looking for errors. All of this becomes extremely useful as your data flows grow complex with multiple outputs, error processing, and conditional outputs.

Conclusion

There you have it, a quick and (hopefully) helpful introduction to our most popular component in Talend. If you want some more tMap knowledge, let me know in the comments below and I’ll spin up my next article around some more advanced mapping functionalities. Happy connecting!

 

The post 4 Ways You Should be Using the Talend tMap Component appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

It’s Official! Talend to Welcome Stitch to the Family!

Talend - Wed, 11/07/2018 - 16:12
The Acquisition

Today, Talend announced it has signed a definitive agreement to acquire Stitch, a self-service cloud data integration company that provides an exceptionally quick, easy, and intuitive experience in modern cloud environments. With Stitch, Talend will be able to provide both SMB and enterprise customers with a highly efficient way to move data from cloud sources to cloud data warehouses, and to buy in a frictionless manner.

As companies standardize on using the cloud for analytics, Talend Cloud has become a compelling solution for customers of all sizes to meet their complete data and application needs. We have developed Talend Cloud into the ideal choice by bringing together a broad range of functionality in one platform – from native big data and embedded data quality to enterprise-level CI/CD capabilities and data governance. Now, with the addition of Stitch, Talend customers have an even more comprehensive tool to complete their cloud-first and digital transformation mission.

Why Stitch?

To improve customer experiences, companies need to collect and analyze vast amounts of data across cloud and on-premises systems. When we look deeper at the challenge, it is often the data analysts, business analysts, and data engineers who want to collect data from their cloud apps, such as Salesforce, Marketo, and Google Analytics, and put it into a cloud data warehouse such as Amazon Redshift or Snowflake. And when we look across a company, each department, from marketing to finance to HR to manufacturing, has this need to collect more data and derive more insight.

Unfortunately, many companies struggle to efficiently collect data, which means they cannot reach their data-driven potential. There may be an IT bottleneck, it may take time to get the systems up and running, or the line of business may be doing it with inefficient hand-coding. This is the problem that Stitch addresses: it provides self-service tools, in the cloud, that automate loading data into cloud data warehouses. And the process of getting started and loading data takes just minutes. So now anyone in a company can easily and quickly load data into a cloud data lake or data warehouse.

At Talend, we saw the emergence of this new data integration category and how it would immediately benefit our customers. Talend provides tools that address all types of integration complexity, where you build data pipelines to collect, govern, transform, and share data. Stitch provides a complementary solution that will enable many more people in an organization to collect more data, which can then be governed, transformed, and shared with Talend, meaning faster and better insight for all.

Stitch is Available for Free Trial Now

Over the next few months, we will build out more features and services that are part of our focus on addressing any integration use cases by connecting any data and application with Talend Cloud in a seamless and frictionless manner.

Stitch is available for purchase or evaluation today. Sign up for a free trial at https://www.stitchdata.com/. For complex integration use cases, try Talend Cloud for 30 days for free at https://cloud.talend.com/

The post It’s Official! Talend to Welcome Stitch to the Family! appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

How a revamped data architecture improved the student application process at Boston University

SnapLogic - Thu, 11/01/2018 - 16:51

Founded in 1839, Boston University (BU) is one of the largest private universities in the United States with more than 34,000 enrolled students. The university remains dedicated to its founding principles that are tethered to the belief that higher education should be accessible to all. BU receives close to 65,000 applications each year with each[...] Read the full article here.

The post How a revamped data architecture improved the student application process at Boston University appeared first on SnapLogic.

Categories: ETL

SnapLogic’s Elizabeth Loar recognized as a top woman in SaaS

SnapLogic - Wed, 10/31/2018 - 13:00

Collaborative. Strategic. Data-driven. Results-oriented. Many of us aspire to these qualities at work but SnapLogic’s VP of Finance and Administration Elizabeth Loar lives them every day, and it is precisely why she was named to The SaaS Report’s “Top Women Leaders in SaaS for 2018.” Each year, The SaaS Report (TSR) recognizes the top women in[...] Read the full article here.

The post SnapLogic’s Elizabeth Loar recognized as a top woman in SaaS appeared first on SnapLogic.

Categories: ETL

Machine learning has a data integration problem: The need for self-service

SnapLogic - Tue, 10/30/2018 - 12:54

When we built the Iris Integration Assistant, an AI-powered recommendation engine, it was SnapLogic’s first foray into machine learning (ML). While the experience left us with many useful insights, one stood out above the rest: machine learning, we discovered, is full of data integration challenges. Of course, going into the process, we understood that developing[...] Read the full article here.

The post Machine learning has a data integration problem: The need for self-service appeared first on SnapLogic.

Categories: ETL

Continuous Integration Best Practices – Part 2

Talend - Tue, 10/30/2018 - 11:36

This is the second part of my blog series on CI/CD best practices. For those of you who are new to this blog, please refer to Part 1 of the series, which covers the first 10 best practices. Also, I want to give a big thank you for all the support and feedback! In my last blog, we saw the first ten best practices for working with continuous integration. In this blog, I want to touch on some more best practices. So, with that, let’s jump right in!

Best Practice 11 – Run Your Fastest Tests First

The CI/CD system serves as a channel for all changes entering your system, and hence discovering failures as early as possible is important to minimize the resources devoted to problematic builds. It is recommended that you prioritize and run your fastest tests first and save the complex, long-running tests for later. Once you have validated the build with smaller, quick-running tests and ensured that the build looks good initially, you can run your complex and long-running test cases.

Following this best practice will keep your CI/CD process healthy by enabling you to understand the performance impact of individual tests as well as complete most of your tests early. It also increases the likelihood of fast failures, which means that problematic changes can be reverted or fixed before they block other members’ work.
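
One common way to make the fast/slow split explicit is to tag tests so the CI server runs the cheap group first and only moves on to the long-running group once it passes. The sketch below assumes JUnit 5 as the test framework; most frameworks offer an equivalent grouping mechanism.

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PipelineTests {

    @Test
    @Tag("fast")   // cheap unit check, run in the first CI stage
    void trimsWhitespaceFromCustomerName() {
        assertEquals("Acme", "  Acme ".trim());
    }

    @Test
    @Tag("slow")   // long-running end-to-end check, run in a later stage
    void endToEndLoadIntoTestWarehouse() {
        // ...call the deployed job / integration scenario here...
    }
}

The CI job can then execute the “fast” tag group as its first stage and the “slow” group afterwards.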

Best Practice 12 – Minimize Branching

One of the version control best practices is integrating changes into the parent branch/shared repository early and often. This helps avoid costly integration problems down the line when multiple developers attempt to merge large, divergent, and conflicting changes into the main branch of the repository in preparation for release.

To take advantage of the benefits that CI provides, it is best to limit the number and scope of branches in your repository. It is suggested that developers merge changes from their local branches to the remote branch at least once a day. Branches that are not being tracked by your CI/CD system contain untested code, which should be treated as a threat to the stable code and discarded.

Best Practice 13 – Run Tests Locally

Another best practice (related to my earlier point about discovering failures early) is that teams should be encouraged to run as many tests as possible locally prior to committing to the shared repository. This ensures that the piece of code you are working on is good, and it also makes it easier to recognize unforeseen problems after the code is integrated into the master branch.

Best Practice 14 – Write extensive test cases

To be successful with CI/CD, you need a test suite that covers every piece of code that gets integrated. As a best practice, it is recommended to develop a culture of writing tests for each code change that is integrated into the master branch. The test cases should include unit, integration, regression, and smoke tests, or any kind of test that covers the project end to end.

Best Practice 15 – Rollback

This is probably the most important part of any implementation. As the next best practice, always have an easy way to roll back the changes if something goes wrong. Typically, I have seen organizations do a rollback by redeploying the previous release or redoing the build from the stable code.

Best Practice 16 – Deploy the Same Way Every Time

I have seen organizations with multiple variations of the CI/CD pipeline, from multiple tools to multiple mechanisms. As a best practice, deploy the code the same way every time to avoid unnecessary configuration and maintenance issues across environments. If you use the same deployment method every time, the same set of steps is triggered in all environments, making the pipeline easier to understand and maintain.

Best Practice 17 – Automate the build and deployment

Although manual build and deployment can work, I recommend automating the pipeline. To start with, automation eliminates manual errors; beyond that, it ensures that the development team always works from the latest source code in the repository and compiles it every time to build the final product. With the many tools available, like Jenkins, Bamboo, and Bitbucket, it is very easy to automate creating the workspace, compiling the code, converting it to binaries, and publishing it to Nexus.

Best Practice 18 – Measure Your Pipeline Success

It is a good practice to track the CI/CD pipeline success rate. You can measure it both before and after you start automating and compare the results. Although the metrics for evaluating CI/CD success depend on the organization, some of the points to consider are:

  • Number of jobs deployed monthly/weekly/daily to Dev/TEST/Pre-PROD/PROD
  • Number of Successful/Failure Builds
  • Time taken for each build
  • Rollback time

Best Practice 19 – Backup

Your CI/CD pipeline has a lot of processes and steps involved, and as the next best practice I recommend taking periodic backups of your pipeline. If you are using Jenkins, this can be accomplished via Backup Manager, as shown below.

Best Practice 20 – Clean up CI/CD environment

Lots of builds can make your CI/CD system clumsy over time and might impact overall performance. As the next best practice, I recommend cleaning the CI/CD server periodically. This cleanup could include CI pipelines, temporary workspaces, Nexus repositories, etc.

Conclusion

And with this, I come to the end of the two-part blog series. I hope these best practices are helpful and that you embed them while working with CI/CD.

The post Continuous Integration Best Practices – Part 2 appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Getting Started with Talend Open Studio: Preparing Your Environment

Talend - Mon, 10/29/2018 - 09:56

With millions of downloads, Talend Open Studio is the leading open source data integration solution. Talend makes it easy to design and deploy integration jobs quickly with graphical tools, native code generation, and hundreds of pre-built components and connectors. Sometimes people like to get some more resources they can use to get started with Talend Open Studio, so we have put some of them together in this blog and in a webinar on-demand titled “Introduction to Talend Open Studio for Data Integration.”

In this first blog, we will be discussing how to prepare your environment to ensure you have a smooth download and install process. Additionally, we will go through a quick tour of the tool so that you can see what it has to offer as you begin to work with Talend Open Studio.

Remember, if you want to follow along with us in this tutorial you can download Talend Open Studio here.

Getting Ready to Install Talend Open Studio

Before installing, there are a couple of prerequisites to address: First, make sure you have enough memory and disk space to complete the install (see the documentation for more information). Second, make sure that you have downloaded the most recent version of the Java 8 JDK from Oracle (as found on the Java SE Runtime Environment 8 Downloads page on their website), as Java 9 is not yet supported.

If you’re strictly a Mac user, you need to have Java 8 version 151 installed. To find out which version you currently have on your machine, open your command prompt and run java -version. Here, you can see that Java 8 is already installed.

You can also discover the version a couple other ways, including from within the “About Java” section from within Java’s control panel.

So now we need to set up a JAVA_HOME environment variable. To do this, right-click on This PC and open the properties. Then select Advanced System Settings and, under Environment Variables, click New to create a new variable. Name your new variable JAVA_HOME, enter the path to your Java environment, and click OK. Under System Variables, add the same variable information and then click OK.
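
If you want to double-check what Studio will actually see, a tiny throwaway class (my own example, not part of the Talend install) prints both the variable and the active Java version:

public class CheckJavaEnv {
    public static void main(String[] args) {
        // Should print the path you just configured, e.g. C:\Program Files\Java\jdk1.8.0_151
        System.out.println("JAVA_HOME    = " + System.getenv("JAVA_HOME"));
        // Should report a 1.8.x version, since Java 9 is not yet supported
        System.out.println("java.version = " + System.getProperty("java.version"));
    }
}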

Installing Talend Open Studio

Alright, now the environment is ready for an Open Studio install. If you haven’t already, go to Talend’s official download page and locate the “download free tool” link for Open Studio for Data Integration. Once the installation folder is downloaded, open it and save the extracted files to a new folder. To follow best practice, create a new folder within your C drive. From here you can officially begin the install.

Once it’s finished, open the newly created TOS folder on your C drive and drill in to locate the application link you need to launch Open Studio. If you have the correct Java version installed, enough available memory and disk space, and have completed all the prerequisites, you should easily be able to launch Talend Open Studio on your machine.

When launching the program for the first time, you are presented with the User License Agreement, whose terms you need to read and accept. You are then given the chance to create a new project, import a demo project, or import an existing project. To explore pre-made projects, feel free to import a demo project; to start with your own project right away, create a new project.

Upon opening Studio for the first time, you will need to install some required additional Talend Packages—specifically the package containing all required third-party libraries. These are external modules that are required for the software to run correctly. Before clicking Finish, you must accept all licenses for each library you install.

Getting to Know Your New Tool

Next, let’s walk through some of Open Studio’s features. The initial start-up presents a demo project for you to play around with in order to get familiar with Studio. To build out this project, we need to start within the heart of Open Studio: the Repository. The Repository—found on the left side of the screen— is where data is gathered related to the technical items used to design jobs. This is where you can manage metadata (database connections, database tables, as well as columns) and jobs once you begin creating them.  

You can drag and drop this metadata from the Repository into the “Design Workspace”. This is where you lay out and design Jobs (or “flows”). You can view the job graphically within the Designer tab or use the Code tab to see the code generated and identify any possible errors.

To the right, you can access the Component Palette, which contains hundreds of different technical components used to build your jobs, grouped together in families. A component is a preconfigured connector used to perform a specific data integration operation. It can minimize the amount of hand-coding required to work on data from multiple heterogeneous sources.

As you build out your job, you can reference the job details within the “Component Tabs” below the Design Workspace. Each following tab displays the properties of the selected element within the design workspace. It’s here that component properties and parameters can be added or edited. Finally, next to the Components Tab, you can find the Run tab, which lets you execute your job. We hope this has been useful, and in our next blog, we will build a simple job moving data into a cloud data warehouse. Want to see more tutorials? Comment below to share what videos would be most helpful to you when starting your journey with Talend Open Studio for Data Integration.

The post Getting Started with Talend Open Studio: Preparing Your Environment appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Three reasons why you need to modernize your legacy enterprise data architecture

SnapLogic - Fri, 10/26/2018 - 13:33

Previously published on itproportal.com. A system must undergo “modernization” when it can no longer address contemporary problems sufficiently. Many systems that now need overhauling were once the best available options for dealing with certain challenges. But the challenges they solved were confined to the business, technological, and regulatory environments in which they were conceived. Informatica,[...] Read the full article here.

The post Three reasons why you need to modernize your legacy enterprise data architecture appeared first on SnapLogic.

Categories: ETL

Future-proof your API lifecycle strategy

SnapLogic - Thu, 10/25/2018 - 13:32

Every software professional knows what an application programming interface (API) is, right? It’s a decades-old technology designed to allow data flow between different applications. Today, modern APIs do much more than just enable inter-application data exchange. They are the foundation of new business processes, the lifeblood of customer-centric innovation. APIs speed new business processes into production,[...] Read the full article here.

The post Future-proof your API lifecycle strategy appeared first on SnapLogic.

Categories: ETL

Continuous Integration Best Practices – Part 1

Talend - Tue, 10/23/2018 - 18:00

In this blog, I want to highlight some of the best practices that I’ve come across as I’ve implemented continuous integration with Talend. For those of you who are new to CI/CD, please go through parts 1 and 2 of my previous blog series on ‘Continuous Integration and workflows with Talend and Jenkins’. This blog will also give you some basic guidance on how to implement and maintain a CI/CD system. These recommendations will help improve the effectiveness of CI/CD.

Without any further delay – let’s jump right in!

Best Practice 1 – Use Talend Recommended Architectures

For every product Talend offers, there is also a recommended architecture. For details on the recommended architecture for our different products, please refer to our Git repo: https://talendpnp.github.io/ . This repository has details on every product Talend offers. It’s truly a great resource; however, for this blog I am focusing only on the CI/CD aspect of the Data Integration platform.

In the architecture for Talend Data Integration, it’s recommended to have a separate Software Development Life Cycle (SDLC) environment. This SDLC environment should typically consist of an artifact repository like Nexus, a version control server like Git, Talend CommandLine, Maven, and the Talend CI Builder plugin. The SDLC server would typically look as follows (the optional components are marked in yellow):

As the picture shows, the recommendation suggests having separate servers for Nexus, the version control system, and the CI server. One Nexus is shared across all environments, and all environments access the binaries stored in Nexus for job execution. The version control system is needed only in the development environment and is not accessed from other environments.

Best Practice 2 –  Version Control System

Continuous integration requires working with a version control system, and hence it is important to have a healthy, clean system for CI to work fast. Talend recommends a few best practices while working with Git; please go through the links below.

Best Practice 3 –  Continuous Integration Environment

Talend recommends 4 environments with a continuous integration setup (see below). The best practice is illustrated with Git, Jenkins, and Nexus as an example. The Git server in the SDLC environment is accessible only from the development environment; other environments cannot access it.

All the coding activities take place in the development environment and are pushed to the version control system, Git. The CI/CD process takes the code from Git, converts it into binaries, and publishes them to the Nexus snapshot repository. All non-production environments have access to the Nexus release and snapshot repositories; however, the production environment has access only to the release repository.

It is important to note that one Nexus is shared among all environments.

Best Practice 4 –  Maintaining Nexus

The artifact repository Nexus plays a vital role in continuous integration. Nexus is used by Talend not only for providing software updates and patches, but also as an artifact repository to hold the job binaries.

These binaries are then called via the Talend Administration Center or Talend Cloud to execute the jobs. If your Talend Studio is connected to the Talend Administration Center, all the Nexus artifact repository settings are automatically retrieved from the Talend Administration Center. You can choose to use the retrieved settings to publish your Jobs or configure your own artifact repositories.

If your Studio is working on a local connection, all the fields are pre-filled with the locally-stored default settings.

Now, with respect to CI/CD, as a best practice it is recommended to upload the Talend CI Builder plugin and all the external jar files used by the project to the third-party repository in Nexus. The third-party folder will look as shown below once the Talend CI Builder is uploaded.

Best Practice 5 –  Release and Snapshot Artifacts

To implement a CI/CD pipeline it is important to understand the difference between release and snapshot Artifacts/Repositories. Release artifacts are stable and everlasting in order to guarantee that builds which depend on them are repeatable over time. By default, the Release repositories do not allow redeployment. Snapshots capture a work in progress and are used during development. By default, snapshot repositories allow redeployment.

As a best practice, it is recommended that development teams learn the difference between the repositories and implement the pipeline in such a way that development artifacts refer to the snapshot repository and the rest of the environments refer only to the release repository.

Best Practice 6 – Isolate and Secure the CI/CD Environment

The Talend recommended architecture talks about “isolating” the CI/CD environment. The CI/CD system typically holds some of the most critical information: complete access to your sources and targets, and all credentials. Hence it is critically important to secure and safeguard the CI/CD system.

As a best practice, the CI/CD system should be deployed to internal and protected networks. Setting up VPNs or other network access control technology is recommended to ensure that only authenticated operators are able to access the system. Depending on the complexity of your network topology, your CI/CD system may need to access several different networks to deploy code to different environments. The important point to keep in mind is that your CI/CD systems are highly valuable targets, and in many cases, they have a broad degree of access to your other vital systems. Protecting all external access to the servers and tightly controlling the types of internal access allowed will help reduce the risk of your CI/CD system being compromised.

The image below shows Talend’s recommended architecture, where the CI server, SDLC server, and version control system (Git) are isolated and secured.

Best Practice 7 –  Maintain Production like environment

It is a best practice to have one environment (the QA environment) as close as possible to the production environment. This includes infrastructure, operating system, databases, patches, network topology, firewalls, and configuration.

Having an environment close to production and validating code changes in it helps ensure that the integration accurately reflects how the change will behave in production. It identifies mismatches and last-minute surprises so they can be effectively eliminated, and code can be released safely to production at any time. Note that the more differences there are between your production environment and the QA environment, the lower the chances that your tests will measure how the code will perform when released. Some differences between QA and production are expected, but keeping them manageable and making sure they are well understood is essential.

Best Practice 8 – Keep Your Continuous Integration process fast

CI/CD pipelines help in driving the changes through automated testing cycles to different environments like test, stage and finally to production. Making the CI/CD pipeline fast is very important for the team to be productive.

If a team has to wait long for the build to finish, it defeats the whole purpose. Since all changes must follow this CI/CD process, keeping the pipeline fast and dependable is very important. There are multiple aspects to keeping the CI/CD process fast. To begin with, the CI/CD infrastructure must be good enough not only to suit the current need but also to scale out if needed. It is also important to revisit the test cases in a timely manner to ensure that none of them are adding unnecessary overhead to the system.

Best Practice 9 –  CI/CD should be the only way to deploy to Production

Promoting code through the CI/CD pipeline validates each change so that it fits the organization’s standards and doesn’t introduce any bugs into the existing code. Any failure in a CI/CD pipeline is immediately visible and should stop further code integration and deployment. This is a gatekeeping mechanism that safeguards the important environments from untrusted code.

To utilize these advantages, it is important to ensure that every change in the production environment goes through only the CI/CD pipeline. The CI/CD pipeline should be the only mechanism by which code enters the production environment. This CI/CD pipeline could be automated or via a manual trigger.

Generally, a team follows all this until a production fix or a show-stopper error occurs. When the error is critical, there is pressure to resolve it quickly. It is recommended that even in such scenarios the fix be introduced to other environments via the CI/CD pipeline. Putting your fix through the pipeline (or just using the CI/CD system to roll back) will also prevent the next deployment from erasing an ad hoc hotfix that was applied directly to production. The pipeline protects the validity of your deployments regardless of whether this was a regular, planned release or a fast fix to resolve an ongoing issue. This use of the CI/CD system is yet another reason to keep your pipeline fast.

As an example, the image below shows the option to publish the job via Studio. This approach is not recommended; the recommended approach is to use the pipeline. The example here shows Jenkins.

Best Practice 10 – Build binaries only once

A primary goal of a CI/CD pipeline is to build confidence in your changes and minimize the chance of unexpected impact. If your pipeline requires a building, packaging, or bundling step, that step should be executed only once, and the resulting output should be reused throughout the entire pipeline. This practice helps prevent problems that arise while the code is being compiled or packaged. Also, if you have test cases written and you rebuild the same code for each environment, you end up replicating the testing effort and time in every environment.

To avoid this problem, CI systems should include a build process as the first step in the pipeline that creates and packages the software in a clean environment/temporary workspace. The resulting artifact should be versioned and uploaded to an artifact storage system to be pulled down by subsequent stages of the pipeline, ensuring that the build does not change as it progresses through the system.

Conclusion

In this blog, we’ve started with CI/CD best practices. Hopefully, it has been useful. My next blog in this series will focus on some more best practices in the CI/CD world, so keep watching and happy reading. Until next time!

The post Continuous Integration Best Practices – Part 1 appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

The AI mindset: Bridging industry and academic perspectives

SnapLogic - Tue, 10/23/2018 - 15:54

Eventually, all organizations will have to leverage machine learning to stay competitive. Just like with the move to web applications, mobile applications, and the cloud, machine learning will be the norm, not the exception, to a modern technology stack. While there is certainly a lot of hype surrounding artificial intelligence and machine learning technology, there[...] Read the full article here.

The post The AI mindset: Bridging industry and academic perspectives appeared first on SnapLogic.

Categories: ETL

Introduction to the Agile Data Lake

Talend - Thu, 10/18/2018 - 09:41

Let’s be honest, the ‘Data Lake’ is one of the latest buzzwords everyone is talking about. Like many buzzwords, few really know how to explain what it is, what it is supposed to do, and/or how to design and build one. As pervasive as they appear to be, you may be surprised to learn that Gartner predicts that only 15% of Data Lake projects make it into production. Forrester predicts that 33% of enterprises will take their attempted Data Lake projects off life support. That’s scary! Data Lakes are about getting value from enterprise data, and given these statistics, that nirvana appears to be quite elusive. I’d like to change that, share my thoughts, and hopefully provide some guidance for your consideration on how to design, build, and use a successful Data Lake: an Agile Data Lake. Why agile? Because to be successful, it needs to be.

Ok, to start, let’s look at the Wikipedia definition for what a Data Lake is:

“A data lake is a storage repository that holds a vast amount of raw data in its native format, incorporated as structured, semi-structured, and unstructured data.”

Not bad. Yet considering we need to get value from a Data Lake, this Wikipedia definition is just not quite sufficient. Why? The reason is simple; you can put any data in the lake, but you need to get data out, and that means some structure must exist. The real idea of a data lake is to have a single place to store all enterprise data, ranging from raw data (which implies an exact copy of source system data) through transformed data, which is then used for various business needs including reporting, visualization, analytics, machine learning, data science, and much more.

I like a ‘revised’ definition from Tamara Dull, Principal Evangelist, Amazon Web Services, who says:

“A data lake is a storage repository that holds a vast amount of raw data in its native format, including structured, semi-structured, and unstructured data, where the data structure and requirements are not defined until the data is needed.”

Much better!  Even Agile-like. The reason why this is a better definition is that it incorporates both the prerequisite for data structures and that the stored data would then be used in some fashion, at some point in the future.  From that we can safely expect value and that exploiting an Agile approach is absolutely required.  The data lake therefore includes structured data from relational databases (basic rows and columns), semi-structured data (like CSV, logs, XML, JSON), unstructured data (emails, documents, PDFs) and even binary data (typically images, pictures, audio, & video) thus creating a centralized data store accommodating all forms of data.  The data lake then provides an information platform upon which to serve many business use cases when needed.  It is not enough that data goes into the lake, data must come out too.

And, we want to avoid the ‘Data Swamp’ which is essentially a deteriorated and/or unmanaged data lake that is inaccessible to and/or unusable by its intended users, providing little to no business value to the enterprise.  Are we on the same page so far?  Good.

Data Lakes – In the Beginning

Before we dive deeper, I’d like to share how we got here.  Data Lakes represent an evolution resulting from an explosion of data (volume-variety-velocity), the growth of legacy business applications plus numerous new data sources (IoT, WSL, RSS, Social Media, etc.), and the movement from on-premise to cloud (and hybrid). 

Additionally, business processes have become more complex, new technologies have recently been introduced enhancing business insights and data mining, plus exploring data in new ways like machine learning and data science.  Over the last 30 years we have seen the pioneering of a Data Warehouse (from the likes of Bill Inmon and Ralph Kimball) for business reporting all the way through now to the Agile Data Lake (adapted by Dan Linstedt, yours truly, and a few other brave souls) supporting a wide variety of business use cases, as we’ll see.

To me, Data Lakes represent the result of this dramatic data evolution and should ultimately provide a common foundational, information warehouse architecture that can be deployed on-premise, in the cloud, or a hybrid ecosystem. 

Successful Data Lakes are pattern-based, metadata-driven (for automation) business data repositories, accounting for data governance and data security (à la GDPR & PII) requirements. Data in the lake should present coalesced data and aggregations of the “record of truth”, ensuring information accuracy (which is quite hard to accomplish unless you know how) and timeliness. Following an Agile/Scrum methodology, using metadata management, applying data profiling, master data management, and such, I think a Data Lake must represent a ‘Total Quality Management’ information system. Still with me? Great!

What is a Data Lake for?

Essentially a data lake is used for any data-centric business use case, downstream of System (Enterprise) Applications, that helps drive corporate insights and operational efficiency. Here are some common examples:

  • Business Information, Systems Integration, & Real Time data processing
  • Reports, Dashboards, & Analytics
  • Business Insights, Data Mining, Machine Learning, & Data Science
  • Customer, Vendor, Product, & Service 360

How do you build an Agile Data Lake?

As you can see, there are many ways to benefit from a successful Data Lake. My question to you is, are you considering any of these? My bet is that you are. My next questions are: Do you know how to get there? Are you able to build a Data Lake the RIGHT way and avoid the swamp? I’ll presume you are reading this to learn more. Let’s continue…

There are three key principles I believe you must first understand and must accept:

  • ⇒ A PROPERLY implemented Ecosystem, Data Models, Architecture, & Methodologies
  • ⇒ The incorporation of EXCEPTIONAL Data Processing, Governance, & Security
  • ⇒ The deliberate use of Job Design PATTERNS and BEST PRACTICES

A successful Data Lake must also be agile; it then becomes a data processing and information delivery mechanism designed to augment business decisions and enhance domain knowledge. A Data Lake, therefore, must have a managed lifecycle. This lifecycle incorporates 3 key phases:

  1. INGESTION:
    • Extracting raw source data, accumulating (typically written to flat files) in a landing zone or staging area for downstream processing & archival purposes
  2. ADAPTATION:
    • Loading & Transformation of this data into usable formats for further processing and/or use by business users
  3. CONSUMPTION:
    • Data Aggregations (KPI’s, Data-points, or Metrics)
    • Analytics (actuals, predictive, & trends)
    • Machine Learning, Data Mining, & Data Science
    • Operational System Feedback & Outbound Data Feeds
    • Visualizations, & Reporting

The challenge is how to avoid the swamp. I believe you must use the right architecture, data models, and methodology. You really must shift away from your ‘legacy’ thinking and adapt and adopt a ‘modern’ approach. This is essential. Don’t fall into the trap of thinking you know what a data lake is and how it works until you consider these critical points.

Ok then, let’s examine then these three phases a bit more.  Data Ingestion is about capturing data, managing it, and getting it ready for subsequent processing.  I think of this like a box crate of data, dumped onto the sandy beach of the lake; a landing zone called a ’Persistent Staging Area’.  Persistent because once it arrives, it stays there; for all practical purposes, once processed downstream, becomes an effective archive (and you don’t have to copy it somewhere else).  This PSA will contain data, text, voice, video, or whatever it is, which accumulates.

You may notice that I am not talking about technology yet. I will, but let me at least point out that depending upon the technology used for the PSA, you might need to offload this data at some point. My thinking is that an efficient file storage solution is best suited for this 1st phase.

Data Adaptation is a comprehensive, intelligent coalescence of the data, which must adapt organically to survive and provide value. These adaptations take several forms (we’ll cover them below) yet essentially reside first in a raw, lowest-level-of-granularity data model, which can then be further processed (or, as I call it, business purposed) for a variety of domain use cases. The data processing requirements here can be quite involved, so I like to automate as much of this as possible. Automation requires metadata. Metadata management presumes governance. And don’t forget security. We’ll talk about these more shortly.

Data Consumption is not just about business users; it is about business information, the knowledge it supports, and hopefully, the wisdom derived from it.  You may be familiar with the DIKW Pyramid: Data > Information > Knowledge > Wisdom.  I like to insert ‘Understanding’ after ‘Knowledge’, as it leads to wisdom.

Data should be treated as a corporate asset and invested in as such.  Data then becomes a commodity and allows us to focus on the information, knowledge, understanding, and wisdom derived from it.  Therefore, it is about the data and getting value from it.

Data Storage Systems: Data Stores

Ok, as we continue to formulate the basis for building a Data Lake, let’s look at how we store data.  There are many ways we do this.  Here’s a review:

  • DATABASE ENGINES:
    • ROW: traditional Relational Database Management Systems (RDBMS) (i.e.: Oracle, MS SQL Server, MySQL, etc.)
    • COLUMNAR: relatively unknown; feels like an RDBMS but optimized for columns (i.e.: Snowflake, Presto, Redshift, Infobright, & others)
  • NoSQL – “Not Only SQL”:
    • Non-relational, eventual-consistency storage & retrieval systems (i.e.: Cassandra, MongoDB, & more)
  • HADOOP:
    • Distributed data processing frameworks supporting high data Volume, Velocity, & Variety (i.e.: Cloudera, Hortonworks, MapR, EMR, & HDInsight)
  • GRAPH – “Triple-Store”:
    • Subject-Predicate-Object, index-free ‘triples’; based upon graph theory (i.e.: AllegroGraph & Neo4j)
  • FILE SYSTEMS:
    • Everything else under the sun (i.e.: ASCII/EBCDIC, CSV, XML, JSON, HTML, Avro, Parquet)

There are many ways to store our data, and many considerations to make, so let’s simplify our lives a bit and call them all ‘Data Stores’, regardless of whether they serve as Source, Intermediate, Archive, or Target data storage.  Simply pick the right technology for each type of data store as needed.
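
As a tiny illustration of the ‘file systems’ category, the sketch below writes the same records to CSV and, if the optional pyarrow library happens to be installed, to Parquet, the kind of columnar file format often used for intermediate and archive data stores in a lake.  The file names and library availability are assumptions.

```python
import csv
from pathlib import Path

records = [
    {"order_id": 1, "customer": "C001", "amount": 120.50},
    {"order_id": 2, "customer": "C002", "amount": 75.00},
]

# CSV: the simplest possible file-based data store.
csv_path = Path("orders.csv")
with csv_path.open("w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)

# Parquet: a columnar format better suited to analytical scans over large
# volumes.  Requires the optional pyarrow package (an assumption here).
try:
    import pyarrow as pa
    import pyarrow.parquet as pq

    columns = {key: [row[key] for row in records] for key in records[0]}
    pq.write_table(pa.table(columns), "orders.parquet")
    print("Wrote orders.csv and orders.parquet")
except ImportError:
    print("Wrote orders.csv (install pyarrow to also write Parquet)")
```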

Data Governance

What is Data Governance?  Clearly another industry enigma.  Again, Wikipedia to the rescue:

“Data Governance is a defined process that an organization follows to ensure that high quality data exists throughout the complete lifecycle.”

Does that help?  Not really?  I didn’t think so.  The real idea of data governance is to affirm data as a corporate asset, and to invest in & manage it formally throughout the enterprise so it can be trusted for accountable & reliable decision making.  To achieve these lofty goals, it is essential to appreciate Source-through-Target lineage.  Management of this lineage is a key part of Data Governance and should be well defined and deliberately managed.  Separated into three areas (a small sketch follows the list below), lineage is defined as:

  • ⇒ Schematic Lineage maintains the metadata about the data structures
  • ⇒ Semantic Lineage maintains the metadata about the meaning of data
  • ⇒ Data Lineage maintains the metadata of where data originates & its auditability as it changes allowing ‘current’ & ‘back-in-time’ queries
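
To make the three flavors of lineage a little more tangible, here is a minimal, hypothetical sketch of the kind of metadata record each one might hold; the field names are illustrative assumptions, not the schema of any particular governance tool.

```python
from dataclasses import dataclass

@dataclass
class SchematicLineage:
    """Metadata about data structures: which column maps to which."""
    source_table: str
    source_column: str
    target_table: str
    target_column: str

@dataclass
class SemanticLineage:
    """Metadata about meaning: the business definition behind a data element."""
    business_term: str
    definition: str
    data_element: str        # e.g. "raw_vault.sat_customer.country_code"

@dataclass
class DataLineage:
    """Metadata about origin and change, enabling back-in-time queries."""
    data_element: str
    record_source: str       # where the value originated
    load_date: str           # when this version arrived
    changed_by_process: str  # the job or process that produced it

example = DataLineage(
    data_element="raw_vault.sat_customer.country_code",
    record_source="crm.customers",
    load_date="2018-11-01T02:15:00Z",
    changed_by_process="job_load_sat_customer",
)
print(example)
```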

It is fair to say that a proper, in-depth discussion of data governance, metadata management, data preparation, data stewardship, and data glossaries is essential, but if I did that here we’d never get to the good stuff.  Perhaps another blog?  Ok, but later…

Data Security

Data Lakes must also ensure that personal data (PII, subject to regulations like GDPR) is secure and can be removed (disabled) or updated upon request.  Securing data requires access policies, policy enforcement, encryption, and record maintenance techniques.  In fact, all corporate data assets need these features, which should be a cornerstone of any Data Lake implementation.  There are three states of data to consider here:

  • ⇒ DATA AT REST in some data store, ready for use throughout the data lake life cycle
  • ⇒ DATA IN FLIGHT as it moves through the data lake life cycle itself
  • ⇒ DATA IN USE perhaps the most critical, at the user-facing elements of the data lake life cycle

Talend works with several technologies offering data security features.  In particular, ‘Protegrity Cloud Security’ provides these capabilities using Talend-specific components and integrated features well suited for building an Agile Data Lake.  Please feel free to read “BUILDING A SECURE CLOUD DATA LAKE WITH AWS, PROTEGRITY AND TALEND” for more details.  We are working together with some of our largest customers using this valuable solution.
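
Vendor platforms such as Protegrity handle these concerns end to end, so purely as a generic illustration of the ‘data at rest’ and ‘data in use’ states, here is a minimal sketch using the open source Python cryptography package to encrypt a sensitive field before it is stored and decrypt it at the point of use.  The package choice, key handling, and field names are assumptions for illustration only and are not how Protegrity or Talend implement security.

```python
# Requires the open source 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

# In practice the key would come from a managed key store (KMS/HSM) and would
# never be generated and held in application code like this.
key = Fernet.generate_key()
cipher = Fernet(key)

record = {"customer_id": "C001", "email": "jane.doe@example.com"}

# Encrypt the sensitive field before persisting it (data at rest).
record["email"] = cipher.encrypt(record["email"].encode("utf-8"))
print("Stored value:", record["email"])

# Authorized consumers decrypt at the point of use (data in use).
print("Decrypted:", cipher.decrypt(record["email"]).decode("utf-8"))
```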

Agile Data Lake Technology Options

Processing data into and out of a data lake requires technology (hardware and software) to implement.  Grappling with the many, many options can be daunting.  It is so easy to take these choices for granted, picking anything that sounds good.  It’s only after you better understand the data involved, the systems chosen, and the development effort required that you find the wrong choice has been made.  Isn’t this the definition of a data swamp?  How do we avoid this?

A successful Data Lake must incorporate a pliable architecture, data model, and methodology.  We’ve been talking about that already.  But picking the right ‘technology’ is more about the business data requirements and expected use cases.  I have some good news here: you can de-couple the data lake design from the technology stack.  To illustrate this, here is a ‘Marketecture’ diagram depicting the many different technology options crossing through the agile data lake architecture.

As shown above, there are many popular technologies available, and you can choose different capabilities to suit each phase in the data lake life cycle.  For those who follow my blogs, you already know I have a soft spot for Data Vault; since I’ve detailed this approach before, I’ll simply point you to my earlier posts on the subject.

You should know that Dan Linstedt created this approach and has developed considerable content you may find interesting.
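
For readers who have not met Data Vault before, here is a minimal sketch (using SQLite purely for convenience) of the core pattern Dan Linstedt describes: a hub that holds the business key, and a satellite that holds descriptive attributes with a load date so history is preserved.  The table and column names are illustrative assumptions; a real raw vault would also include links and hash keys, and would live on whichever data store you chose above.

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")

# Hub: one row per business key, insert-only.
conn.execute("""
    CREATE TABLE hub_customer (
        customer_hk   TEXT PRIMARY KEY,   -- hash of the business key
        customer_bk   TEXT NOT NULL,      -- the business key itself
        record_source TEXT NOT NULL,
        load_date     TEXT NOT NULL
    )
""")

# Satellite: descriptive attributes over time, insert-only.
conn.execute("""
    CREATE TABLE sat_customer_details (
        customer_hk   TEXT NOT NULL,
        load_date     TEXT NOT NULL,
        customer_name TEXT,
        country_code  TEXT,
        record_source TEXT NOT NULL,
        PRIMARY KEY (customer_hk, load_date)
    )
""")

now = datetime.now(timezone.utc).isoformat()
conn.execute("INSERT INTO hub_customer VALUES (?, ?, ?, ?)",
             ("hk_c001", "C001", "crm.customers", now))
conn.execute("INSERT INTO sat_customer_details VALUES (?, ?, ?, ?, ?)",
             ("hk_c001", now, "Acme Ltd", "UK", "crm.customers"))
conn.commit()

print(conn.execute("SELECT * FROM sat_customer_details").fetchall())
```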

I hope you find all this content helpful.  Yes, it is a lot to ingest, digest, and understand (Hey, that sounds like a data lake), but take the time.  If you are serious about building and using a successful data lake you need this information.

The Agile Data Lake Life Cycle

Ok, whew – a lot of information already and we are not quite done.  I have mentioned that a data lake has a life cycle.  A successful Agile Data Lake Life Cycle incorporates the 3 phases I’ve described above, data stores, data governance, data security, metadata management (lineage), and of course: ‘Business Rules’.  Notice that what we want to do is de-couple ‘Hard’ business rules (that transform physical data in some way) from ‘Soft’ business rules (that adjust result sets based upon adapted queries).  This separation contributes to the life cycle being agile. 

Think about it: if you push physical data transformations upstream, then when the inevitable changes occur, the impact on everything downstream is reduced.  On the flip side, when the dynamics of business impose new criteria, changing a SQL ‘where’ clause downstream will have less impact on the data models it pulls from.  The Business Vault provides this insulation from the Raw Data Vault, as it can be reconstituted when radical changes occur.
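
As a concrete, purely illustrative example of that separation: a ‘hard’ rule physically standardizes data once, upstream, as it is loaded, while a ‘soft’ rule is just query-time criteria that can change without touching stored data.  The specific rules, table names, and column names below are made-up assumptions.

```python
# A 'hard' business rule: applied once, upstream, as data is physically loaded.
def standardize_country_code(raw_value: str) -> str:
    """Normalize free-text country values to ISO-style codes."""
    lookup = {"united kingdom": "GB", "uk": "GB", "great britain": "GB"}
    return lookup.get(raw_value.strip().lower(), raw_value.strip().upper())

# A 'soft' business rule: applied downstream, at query time, so it can change
# without reloading or restructuring any stored data.
SOFT_RULE_QUERY = """
    SELECT customer_name, country_code
    FROM business_vault_customer_current
    WHERE country_code = 'GB'          -- tweak the criteria, not the data
      AND is_active = 1
"""

print(standardize_country_code(" United Kingdom "))  # -> GB
print(SOFT_RULE_QUERY)
```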

Additionally, a Data Lake is not a Data Warehouse but, in fact, encapsulates one as a use case.  This is a critical takeaway from this blog.  Taking this further, we are not creating ‘Data Marts’ anymore; we want ‘Information Marts’.  Did you review the DIKW Pyramid link I mentioned above?  Data should, of course, be considered and treated as a business asset.  Yet simultaneously, data is now a commodity, leading us to information, knowledge, and, hopefully, wisdom.

This diagram walks through the Agile Data Lake Life Cycle from Source to Target data stores.  Study this.  Understand this.  You may be glad you did.  Ok, let me finish by saying that, to be agile, a data lake must:

  • BE ADAPTABLE
    • Data Models should be additive, without impact to the existing model when new sources appear
  • BE INSERT ONLY
    • Especially for Big Data technologies, where Updates & Deletes are expensive (see the sketch after this list)
  • PROVIDE SCALABLE OPTIONS
    • Hybrid infrastructures can offer extensive capabilities
  • ALLOW FOR AUTOMATION
    • Metadata, in many aspects, can drive the automation of data movement
  • PROVIDE AUDITABLE, HISTORICAL DATA
    • A key aspect of Data Lineage
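
To show how insert-only storage and auditable history work hand in hand (the sketch promised in the list above), here is a small example: instead of updating a row in place, each change is appended with a load date, and a ‘back-in-time’ query reconstructs what was known as of any given day.  The table and column names are assumptions carried over from the earlier Data Vault sketch.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sat_customer_details (
        customer_hk   TEXT NOT NULL,
        load_date     TEXT NOT NULL,
        country_code  TEXT,
        PRIMARY KEY (customer_hk, load_date)
    )
""")

# Insert-only: a change in the source produces a NEW row, never an UPDATE.
history = [
    ("hk_c001", "2018-01-15", "UK"),
    ("hk_c001", "2018-06-02", "GB"),   # corrected code arrives later
]
conn.executemany("INSERT INTO sat_customer_details VALUES (?, ?, ?)", history)

# 'Back-in-time' query: what did we believe about this customer on 2018-03-01?
as_of = "2018-03-01"
row = conn.execute("""
    SELECT country_code
    FROM sat_customer_details
    WHERE customer_hk = ? AND load_date <= ?
    ORDER BY load_date DESC
    LIMIT 1
""", ("hk_c001", as_of)).fetchone()
print(row)   # -> ('UK',)
```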

And finally, consider that STAR schemas are, and always were, designed to be ‘Information Delivery Mechanisms’, a point some in the industry have misunderstood for many years.  For years we have all built Data Warehouses using STAR schemas to deliver reporting and business insights.  These efforts all too often resulted in storing the data warehouse’s raw data in rigid structures, requiring heavy data cleansing and, frankly, causing high impact when upstream systems are changed or added.

The cost in resources and budget has been at the root of many delays, failed projects, and inaccurate results.  This is a legacy mentality, and I believe it is time to shift our thinking to a more modern approach.  The Agile Data Lake is that new way of thinking.  STAR schemas do not go away, but their role has shifted downstream, where they belong and were always intended to be.

Conclusion

This is just the beginning, yet I hope this blog post gets you thinking about all the possibilities now.

As a versatile technology, coupled with a sound architecture, pliable data models, strong methodologies, thoughtful job design patterns, and best practices, Talend can deliver cost-effective, process-efficient, and highly productive data management solutions.  Incorporate all of this as I’ve shown above, and not only will you create an Agile Data Lake, but you will avoid the SWAMP!

Till next time…

The post Introduction to the Agile Data Lake appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Why the cloud can save big data

SnapLogic - Tue, 10/16/2018 - 14:36

This article originally appeared on computable.nl. Many companies like to use big data to make better decisions, strengthen customer relationships, and increase efficiency within the company. They are confronted with a dizzying array of technologies – from open source projects to commercial software – that can help to get a better grip on the large[...] Read the full article here.

The post Why the cloud can save big data appeared first on SnapLogic.

Categories: ETL

Introducing Talend Data Catalog: Creating a Single Source of Trust

Talend - Tue, 10/16/2018 - 06:07

Talend Fall ’18 is here! We’ve released a big update to the Talend platform this time around including support for APIs, as well as new big data and serverless capabilities. You will see blogs from my colleagues to highlight those major new product and features introductions. On my side, I’ve been working passionately to introduce Talend Data Catalog, which I believe has the potential to change the way data is consumed and managed within our enterprise. Our goal with this launch is to help our customers deliver insight-ready data at scale so they can make better and faster decisions, all while spending less time looking for data or making decisions with incomplete data.

You Can’t Be Data Driven without a Data Catalog

Before we jump into features, let’s look at why you need a data catalog. Remember the early days of the Internet? Suddenly, it became so easy and cheap to create content and publish it to anyone that everybody actually did it. Soon enough, that created a data sprawl, and the challenge was no longer to create content but to find it. After two decades, we know that the winners in the web economy are those that created a single point of access to content in their category: Google, YouTube, Baidu, Amazon, Wikipedia.

Now, we are faced with a similar data sprawl in our data-driven economy. IDC research has found that today data professionals are spending 81% of their time searching, preparing, and protecting data, with little time left to turn it into business outcomes. It has become crucial that organizations establish this same single source of access to their data to be in the winner’s circle.

Although technology can help fix the issue, and I’ll come back to it later in the article, enterprises also need to set up a discipline to organize their data at scale, and this discipline is called data governance. But traditional data governance must be re-invented in the face of this data sprawl:  according to Gartner, “through 2022, only 20% of organizations investing in information will succeed in scaling governance for digital business.” Given the sheer number of companies that are awash in data, that percentage is just too small.

Modern data governance is not only about minimizing data risks but also about maximizing data usage, which is why traditional authoritative data governance approaches are not enough. There is a need for a more agile, bottom-up approach. That strategy starts with the raw data, links it to its business context so that it becomes meaningful, takes control of its data quality and security, and fully organizes it for massive consumption.

Empowering this new discipline is the promise of data catalogs, which leverage modern technologies like smart semantics and machine learning to organize data at scale and turn data governance into a team sport by engaging anyone in social curation. 

With the newly introduced Talend Data Catalog, companies can organize their data at scale to make data accessible like never before and address these challenges head-on. By empowering organizations to create a single source of trusted data, it’s a win both for the business, which can find the right data, and for the CIO and CDO, who can now better control data to improve data governance. Now let’s dive into some details on what the Talend Data Catalog is.

Intelligently discover your data

Data catalogs are a perfect fit for companies that have modernized their data infrastructures with data lakes or cloud-based data warehouses, where thousands of raw data items can reside and be accessed at scale. The catalog acts as the fish finder for that data lake, leveraging crawlers across different file systems, traditional, Hadoop, or cloud, and across typical file formats. It then automatically extracts metadata and profiling information for referencing, change management, classification, and accessibility.

Not only can it bring all of that metadata together in a single place, but it can also automatically draw the links between datasets and connect them to a business glossary. In a nutshell, this allows businesses to:

  • Automate the data inventory
  • Leverage smart semantics for auto-profiling, relationship discovery, and classification
  • Document and drive usage now that the data has been enriched and becomes more meaningful

The goal of the data catalog is to unlock data from the applications where it resides.

Orchestrate data curation

Once the metadata has been automatically harvested in a single place, data governance can be orchestrated in a much more efficient way. Talend Data Catalog allows businesses to define the critical data elements in their business glossary and assign data owners for those critical data elements. The data catalog then relates those critical data elements to the data points that refer to them across the information system.

Now data is under control, and data owners can make sure that their data is properly documented and protected. Comments, warnings, or validation can be crowdsourced from any business user for collaborative, bottom-up governance. Finally, the data catalog draws end-to-end data lineage and manages version control. It guarantees accuracy and provides a complete view of the information chain, both of which are critical for data governance and data compliance.

Easy search-based access to trusted data

Talend Data Catalog makes it possible for businesses to locate, understand, use, and share their trusted data faster by searching and verifying data’s validity before sharing with peers. Its collaborative user experience enables anyone to contribute metadata or business glossary information.

Data governance is most often associated with control: a discipline that allows businesses to centrally collect, process, and consume data under certain rules and policies. The beauty of Talend Data Catalog is that it not only controls data but liberates it for consumption as well. This allows data professionals to find, understand, and share data ten times faster. Now data engineers, scientists, analysts, or even developers can spend their time on extracting value from those data sets rather than searching for them or recreating them – removing the risk of your data lake turning into a data swamp.

A recently published IDC report, “Data Intelligence Software for Data Governance,” advocates the benefits of modern data governance and positions the data catalog as the cornerstone of what it defines as Data Intelligence Software. In the report, IDC explains that “technology that supports enablement through governance is called data intelligence software and is delivered in metadata management, data lineage, data catalog, business glossary, data profiling, mastering, and stewardship software.”

For more information, check out the full capabilities of the Talend Data Catalog here.

The post Introducing Talend Data Catalog: Creating a Single Source of Trust appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Astrazeneca: Building the Data Platform of the Future

Talend - Mon, 10/15/2018 - 06:28

AstraZeneca plc is a global, science-led biopharmaceutical company that is the world’s seventh-largest pharmaceutical business, with operations in more than 100 countries. The company focuses on the discovery, development, and commercialization of prescription medicines, which are used by millions of patients worldwide.

It’s one of the few companies to span the entire lifecycle of a medicine; from research and development to manufacturing and supply, and the global commercialization of primary care and specialty care medicines.

Beginning in 2013, AstraZeneca was faced with industry disruption and competitive pressure. For business sustainability and growth, AstraZeneca needed to change their product and portfolio strategy.

As the starting point, they needed to transform their core IT and finance functions. Data is at the heart of these transformations. They had a number of IT-related challenges, including inflexible and non-scalable infrastructure; data silos and diverse data models and file sizes within the organization; a lack of enterprise data governance; and infrastructure over-provisioning for required performance.

The company had grown substantially, including through mergers and acquisitions, and had data dispersed throughout the organization in a variety of systems. Additionally, financial data volumes fluctuate depending on where the company is in the financial cycle; peaks at month-end, quarter-end, or financial year-end are common.

In addition to causing inconsistencies in reporting, silos of information prevented the company and its Science and Enabling Unit division from finding insights hiding in unconnected data sources.

For transforming their IT and finance functions and accelerating financial reporting, AstraZeneca needed to put in place a modern architecture that could enable a single source of the truth. As part of its solution, AstraZeneca began a move to the cloud, specifically Amazon Web Services (AWS), where it could build a data lake to hold data from a range of source systems. The potential benefits of a cloud-based solution included increased innovation and accelerated time to market, lower costs, and simplified systems.

But the AWS data lake was only part of the answer. The company needed a way to capture the data, and that’s where solutions such as Talend Big Data and Talend Data Quality come into play. AstraZeneca selected Talend for its AWS connectivity, flexibility, and licensing model, and valued its ability to scale rapidly without incurring extra costs.

The Talend technologies are responsible for lifting, shifting, transforming, and delivering data into the cloud, extracting from multiple sources, and then pushing that data into Amazon S3.

Their IT and Business Transformation initiative was successful, paving the way for Business Transformation initiatives across five business units, and they are now leveraging this modern data platform to drive new business opportunities.

Attend this session at Talend Connect UK 2018 to learn more about how AstraZeneca transformed its IT and finance functions by developing an event-driven, scalable data-platform to support massive month-end peak activity, leading to financial reporting in half the time and half the cost.

The post Astrazeneca: Building the Data Platform of the Future appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Elsevier: How to Gain Data Agility in the Cloud

Talend - Fri, 10/12/2018 - 12:50

Presenting at Talend Connect London 2018 is Reed Elsevier (part of RELX Group), a $7 billion data and analytics company with 31,000 employees, serving scientists, lawyers, doctors, and insurance companies among its many clients. The company helps scientists make discoveries, lawyers win cases, doctors save lives, insurance companies offer customers lower prices, and save taxpayers money by preventing fraud.

Standardizing business practices for successful growth

As the business grew over the years, different parts of the organization began buying and deploying integration tools, which created management challenges for central IT. It was a “shadow IT” situation, where individual business departments were implementing their own integrations with their own different tools.

With this lack of standardization, integration was handled separately by different units, which made it more difficult for different components of the enterprise to share data. Central IT wanted to bring order to the process and deploy a system that was effective at meeting the company’s needs as well as scalable enough to keep pace with growth.

Moving to the cloud

One of the essential requirements was that any new solution be a cloud-based offering. Elsevier a few years ago became a “cloud first” company, mandating that any new IT services be delivered via the cloud and nothing be hosted on-premises. It also adopted agile methodologies and a continuous deployment approach, to become as nimble as possible when bringing new products or releases to market.

Elsevier selected Talend as a solution and began using it in 2016. Among the vital selection factors were platform flexibility, alignment with the company’s existing infrastructure, and its ability to generate Java code as output and support microservices and containers.

In their Talend Connect session, Delivering Agile integration platforms, Elsevier will discuss how it got up and running rapidly with Talend despite having a diverse development environment. And, how it’s using Talend, along with Amazon Web Services, to build a data platform for transforming raw data into insight at scale across the business. You’ll learn how Elsevier created a dynamic platform using containers, serverless data processing, and continuous integration/continuous deployment to reach a new level of agility and speed.

Agility is among the most significant benefits of their approach using Talend. Elsevier spins up servers as needed and enables groups to independently develop integrations on a common platform without central IT becoming a bottleneck. Since building the platform, internal demand has far surpassed the company’s expectations, as the platform is delivering cost savings and insight at a whole new level.

Attend this session to learn more about how you can transform your integration environment.

 

The post Elsevier: How to Gain Data Agility in the Cloud appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL