AdoptOS

Assistance with Open Source adoption

Open Source News

Five questions to ask about data lakes

SnapLogic - Tue, 10/09/2018 - 18:15

Data is increasingly being recognized as the corporate currency of the digital age. Companies want to leverage data to achieve deeper insights that lead to competitive advantage over their peers. According to IDC projections, total worldwide data will surge to 163 zettabytes (ZB) by 2025, roughly ten times the amount that exists today. The[...] Read the full article here.

The post Five questions to ask about data lakes appeared first on SnapLogic.

Categories: ETL

5 Questions to Ask When Building a Cloud Data Lake Strategy

Talend - Tue, 10/09/2018 - 15:41

In my last blog post, I shared some thoughts on the common pitfalls when building a data lake. As the move to the cloud becomes more and more common, I'd like to further discuss some best practices for building a cloud data lake strategy. When going beyond the scope of integration tools or platforms for your cloud data lake, here are five questions to ask that can be used as a checklist:

1. Does your Cloud Data Lake strategy include a Cloud Data Warehouse?

As many differences as there are between the two, people often compare the two technology approaches: data warehouses centralize structured data, while data lakes are often seen as the holy grail for all types of data. (You can read more about the two approaches here.)

Don't confuse the two; these technology approaches should actually be brought together. You will need a data lake to accommodate all the types of data your business deals with today, be it structured, semi-structured, or unstructured, on-premises or in the cloud, along with newer types of data such as IoT data. The data lake often has a landing zone and a staging zone for raw data; data at this stage is not yet consumable, but you may want to keep it for future discovery or data science projects. A cloud data warehouse, on the other hand, comes into the picture after data is cleansed, mapped, and transformed, so that business analysts can access it and use it for reporting or other analytical purposes. Data at this stage is usually highly processed to fit the data warehouse.

If your approach currently only involves a cloud data warehouse, you are often already losing raw data and some data formats, which makes it less useful for prescriptive or advanced analytics projects, or for machine learning and AI initiatives, since some of the meaning within the data is already lost. Conversely, if you don't have a data warehouse alongside your data lake strategy, you will end up with a data swamp where all data is kept without structure and is not consumable by analysts.

From the integration perspective, make sure your integration tool works with both data lake and data warehouse technologies, which leads us to the next question.


2. Does your integration tool have ETL & ELT?

As much as you may know about ETL in your current on-premises data warehouse, moving it to the cloud is a different story, not to mention in a cloud data lake context. Where and how data is processed really depends on what you need for your business.

Similar to what we described in the first question, sometimes you need to keep more of the raw nature of the data, and other times you need more processing. This requires your integration tool to support both ETL and ELT, so that data transformation can be handled either before the data is loaded to your final target, e.g. a cloud data warehouse, or after the data has landed there. ELT is more often leveraged when the speed of data ingestion is key to your project, or when you want to retain more of the information in your data. Typically, cloud data lakes have a raw data store, then a refined (or transformed) data store. Data scientists, for example, prefer to access the raw data, whereas business users want the normalized data for business intelligence.

Another use of ELT takes advantage of the massively parallel processing capabilities that come with big data technologies such as Spark and Flink. If your use case requires that kind of processing power, ELT is the better choice because the processing scales much further.
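To make that concrete, here is a minimal sketch (using the Spark Java API) of the "LT" part of an ELT flow: the raw data has already been loaded into the lake's landing zone as-is, and the transformation runs afterwards inside the lake's own engine. The bucket paths, column names, and filter are illustrative assumptions, not references to any specific product.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class RawToRefined {

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("elt-refine").getOrCreate();

        // "L": the raw events already sit in the landing zone, untouched.
        Dataset<Row> raw = spark.read().json("s3://my-lake/landing/events/");

        // "T": the transformation runs afterwards, inside the lake's engine,
        // so it scales with the cluster rather than with an integration server.
        raw.createOrReplaceTempView("raw_events");
        Dataset<Row> refined = spark.sql(
            "SELECT customer_id, CAST(event_time AS timestamp) AS event_time, event_type "
                + "FROM raw_events WHERE customer_id IS NOT NULL");

        refined.write().mode("overwrite").parquet("s3://my-lake/refined/events/");

        spark.stop();
    }
}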

3. Can your cloud data lake handle both simple ETL tasks and complex big data ones?

This may look like an obvious question, but when you ask it, put yourself in the users' shoes and really think through whether your choice of tool can meet both requirements.

Not all of your data lake usage will be complex work that requires advanced processing and transformation; much of it can be simple activity such as ingesting new data into the data lake. Often, the tasks extend beyond the data engineering or IT team as well. Ideally, the tool of your choice should handle simple tasks quickly and easily, but also scale in complexity to meet the requirements of advanced use cases. Building a data lake strategy that copes with both helps make your data lake more consumable and practical for different types of users and purposes.

4. How about batch and streaming needs?

You may think your current architecture and technology stack is good enough, and your business is not really in the Netflix business where streaming is a necessity. Get it? Well, think again.

Streaming data has become a part of our everyday lives whether you realize it or not. The "me" culture has put everything in the moment of now. If your business is on social media, you are in streaming. If IoT and sensors are the next growth market for your business, you are in streaming. If you have a website for customer interaction, you are in streaming. In IDC's 2018 Data Integration and Integrity End User Survey, 93% of respondents indicated plans to use streaming technology by 2020. Real-time and streaming analytics have become a must for modern businesses to create that competitive edge. So, this naturally raises the questions: can your data lake handle both your batch and streaming needs? Do you have the technology and people to work with streaming, which is fundamentally different from typical batch processing?

Streaming data is particularly challenging to handle because it is continuously generated by an array of sources and devices as well as being delivered in a wide variety of formats.

One prime example of just how complicated streaming data can be comes from the Internet of Things (IoT). With IoT devices, the data is always on; there is no start and no stop, it just keeps flowing. A typical batch processing approach doesn’t work with IoT data because of the continuous stream and the variety of data types it encompasses.
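To illustrate that "always on" shape, here is a minimal Spark Structured Streaming sketch in Java that continuously lands IoT events from a Kafka topic into the lake's raw zone instead of waiting for a batch window. The broker address, topic name, and storage paths are made-up placeholders.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class IotStreamIngest {

    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder().appName("iot-stream-ingest").getOrCreate();

        // The stream never "finishes": records are read continuously as devices emit them.
        Dataset<Row> telemetry = spark.readStream()
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")
            .option("subscribe", "iot-telemetry")
            .load()
            .selectExpr("CAST(value AS STRING) AS payload", "timestamp");

        // Micro-batches are appended to the raw zone as they arrive.
        StreamingQuery query = telemetry.writeStream()
            .format("parquet")
            .option("path", "s3://my-lake/landing/iot/")
            .option("checkpointLocation", "s3://my-lake/_checkpoints/iot/")
            .start();

        query.awaitTermination();
    }
}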

So make sure your data lake strategy and data integration layer can be agile enough to work with both use cases.

You can find more tips on streaming data here.

5. Can your data lake strategy help cultivate a collaborative culture?

Last but not least, collaboration.

It may take one person to implement the technology, but it will take a whole village to implement it successfully. The only way to make sure your data lake is a success is to have people use it, improving the workflow one way or another.

At a smaller scope, the workflows in your data lake should be reusable among data engineers: less rework is needed, and operationalization is much faster. At a larger scope, the data lake approach can improve collaboration between IT and business teams. Your business teams are the experts on their data; they know its meaning and context better than anyone else. Data quality can improve significantly if the business team can work on the data for business-rule transformations while IT still governs that activity. Drawing that line with governance in place is delicate work and no easy task. But think through whether your data lake approach is governed yet open at the same time, encouraging not only final consumption of the data but also improvement of data quality along the way, so the results can be recycled and made available to a broader organization.

To summarize, these are the five questions I would recommend asking when thinking about building a cloud data lake strategy. By no means are these the only questions you should consider, but hopefully they prompt some thinking outside of your typical technical checklist.

The post 5 Questions to Ask When Building a Cloud Data Lake Strategy appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

How to Implement a Job Metadata Framework using Talend

Talend - Tue, 10/09/2018 - 14:39

Today, data integration projects are not just about moving data from point A to point B; there is much more to them. The ever-growing volume of data and the speed at which it changes present many challenges in managing the end-to-end data integration process. To address these challenges, it is paramount to track the data's journey from source to target in terms of start and end timestamps, job status, business area, subject area, and the individuals responsible for a specific job. In other words, metadata is becoming a major player in data workflows. In this blog, I want to review how to implement a job metadata framework using Talend. Let's get started!

Metadata Framework: What You Need to Know

The centralized management and monitoring of this job metadata are crucial to data management teams. An efficient and flexible job metadata framework architecture requires a number of things. Namely, a metadata-driven model and job metadata.

A typical Talend Data Integration job performs the following tasks for extracting the data from source systems and loading them into target systems.

  1. Extracting data from source systems
  2. Transforming the data, which involves:
    • Cleansing source attributes
    • Applying business rules
    • Data Quality
    • Filtering, Sorting, and Deduplication
    • Data aggregations
  3. Loading the data into target systems
  4. Monitoring, Logging, and Tracking the ETL process

Figure 1: ETL process

Over the past few years, job metadata has evolved to become an essential component of any data integration project. What happens when you don't have job metadata in your data integration jobs? It can lead to incorrect ETL statistics and logging, as well as difficulty handling errors that occur during the data integration process. A successful Talend Data Integration project depends on how well the job metadata framework is integrated with the enterprise data management process.

Job Metadata Framework

The job metadata framework is a metadata-driven model that integrates well with the Talend product suite. Talend provides a set of components for capturing statistics and logging information while the data integration process is in flight.

Remember, the primary objective of this blog is to provide an efficient way to manage the ETL operations with a customizable framework. The framework includes the Job management data model and the Talend components that support the framework.

Figure 2: Job metadata model

Primarily, the Job Metadata Framework model includes:

  • Job Master
  • Job Run Details
  • Job Run Log
  • File Tracker
  • Database High Water Mark Tracker for extracting the incremental changes

This framework is designed to allow production support to monitor the job refresh cycle and look for issues relating to job failures and any discrepancies while processing data loads. Let's go through each piece of the framework step by step.

Talend Jobs

Talend_Jobs is a Job Master Repository table that manages the inventory of all the jobs in the Data Integration domain.

  • JobID: Unique identifier to identify a specific job
  • JobName: The name of the job as per the naming convention (<type>_<subject area>_<table_name>_<target_destination>)
  • BusinessAreaName: Business Unit / Department or Application Area
  • JobAuthorDomainID: Job author information
  • Notes: Additional information related to the job
  • LastUpdateDate: The last updated date

Talend Job Run Details

Talend_Job_Run_Details registers every run of a job and its sub jobs, with statistics and run details such as job status, start time, end time, and the total duration of the main job and its sub jobs.

  • ID: Unique identifier to identify a specific job run
  • BusinessAreaName: Business Unit / Department or Application Area
  • JobAuthorDomainID: Job author information
  • JobID: Unique identifier to identify a specific job
  • JobName: The name of the job as per the naming convention (<type>_<subject area>_<table_name>_<target_destination>)
  • SubJobID: Unique identifier to identify a specific sub job
  • SubJobName: The name of the sub job as per the naming convention (<type>_<subject area>_<table_name>_<target_destination>)
  • JobStartDate: Main job start timestamp
  • JobEndDate: Main job end timestamp
  • JobRunTimeMinutes: Main job total execution duration
  • SubJobStartDate: Sub job start timestamp
  • SubJobEndDate: Sub job end timestamp
  • SubJobRunTimeMinutes: Sub job total execution duration
  • SubJobStatus: Sub job status (Pending / Complete)
  • JobStatus: Main job status (Pending / Complete)
  • LastUpdateDate: The last updated date
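To show how a job could feed this table, here is a minimal JDBC sketch of what a small post-job step (for example a tJava or tDBRow component) might execute. The connection URL, credentials, and helper class name are assumptions, and only a subset of the columns from the model above is populated.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

public class JobRunRecorder {

    // Registers one job run in Talend_Job_Run_Details.
    public static void record(String jobId, String jobName, Timestamp start, Timestamp end, String status)
            throws Exception {

        String sql = "INSERT INTO Talend_Job_Run_Details "
            + "(JobID, JobName, JobStartDate, JobEndDate, JobRunTimeMinutes, JobStatus, LastUpdateDate) "
            + "VALUES (?, ?, ?, ?, ?, ?, CURRENT_TIMESTAMP)";

        try (Connection conn = DriverManager.getConnection("jdbc:postgresql://meta-db/etl_meta", "etl", "secret");
                PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, jobId);
            ps.setString(2, jobName);
            ps.setTimestamp(3, start);
            ps.setTimestamp(4, end);
            ps.setLong(5, (end.getTime() - start.getTime()) / 60000); // duration in minutes
            ps.setString(6, status);
            ps.executeUpdate();
        }
    }
}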

Talend Job Run Log

Talend_Job_Run_Log logs all the errors that occur during a particular job execution. Talend_Job_Run_Log extracts the details from the Talend components specifically designed for catching logs (tLogCatcher) and statistics (tStatCatcher).

Figure 3: Error logging and Statistics

The tLogCatcher component in Talend operates as a log function triggered during the process by Java exceptions or by the tDie or tWarn components. In order to catch exceptions coming from the job, the tCatch function needs to be enabled on all the components.

The tStatCatcher component gathers the job processing metadata at the job level.

  • runID: Unique identifier to identify a specific job run
  • JobID: Unique identifier to identify a specific job
  • Moment: The time when the message is caught
  • Pid: The process ID of the Job
  • parent_pid: The parent process ID
  • root_pid: The root process ID
  • system_pid: The system process ID
  • project: The name of the project
  • Job: The name of the Job
  • job_repository_id: The ID of the Job file stored in the repository
  • job_version: The version of the current Job
  • context: The name of the current context
  • priority: The priority sequence
  • Origin: The name of the component, if any
  • message_type: Begin or End
  • message: The error message generated by the component when an error occurs. This is an After variable. This variable functions only if the Die on error checkbox is cleared.
  • Code
  • duration: Time for the execution of a Job or a component, with the tStatCatcher Statistics check box selected
  • Count: Record counts
  • Reference: Job references
  • Thresholds: Log thresholds for managing error handling workflows

Talend High Water Mark Tracker

Talend_HWM_Tracker helps in processing delta and incremental changes for a particular table. The high water mark tracker is helpful when Change Data Capture is not enabled and changes are extracted based on specific conditions such as "last_updated_date_time" or "revision_date_time". In some cases, the high water mark relates to the highest sequence number, when records are processed based on a sequence number.

  • Id: Unique identifier to identify a specific source table
  • jobID: Unique identifier to identify a specific job
  • job_name: The name of the Job
  • table_name: The name of the source table
  • environment: The source table environment
  • database_type: The source table database type
  • hwm_datetime: High water mark field (datetime)
  • hwm_integer: High water mark field (number)
  • hwm_Sql: High water mark SQL statement
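As an illustration of how the tracker drives an incremental load, here is a minimal Java sketch that reads the stored high water mark for a table, extracts only the rows changed since then, and advances the mark afterwards. The two connections, the source column last_updated_date_time, and the helper class name are assumptions.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

public class HighWaterMarkExtract {

    public static void extractDelta(Connection meta, Connection source, String jobId, String tableName)
            throws Exception {

        // 1. Read the current high water mark for this job/table.
        Timestamp hwm;
        try (PreparedStatement ps = meta.prepareStatement(
                "SELECT hwm_datetime FROM Talend_HWM_Tracker WHERE jobID = ? AND table_name = ?")) {
            ps.setString(1, jobId);
            ps.setString(2, tableName);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    throw new IllegalStateException("No high water mark registered for " + tableName);
                }
                hwm = rs.getTimestamp(1);
            }
        }

        // 2. Pull only the rows that changed after the stored mark.
        Timestamp newHwm = hwm;
        try (PreparedStatement ps = source.prepareStatement(
                "SELECT * FROM " + tableName + " WHERE last_updated_date_time > ?")) {
            ps.setTimestamp(1, hwm);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    Timestamp changed = rs.getTimestamp("last_updated_date_time");
                    if (changed.after(newHwm)) {
                        newHwm = changed;
                    }
                    // ... hand the row to the load step ...
                }
            }
        }

        // 3. Advance the mark so the next run starts where this one stopped.
        try (PreparedStatement ps = meta.prepareStatement(
                "UPDATE Talend_HWM_Tracker SET hwm_datetime = ? WHERE jobID = ? AND table_name = ?")) {
            ps.setTimestamp(1, newHwm);
            ps.setString(2, jobId);
            ps.setString(3, tableName);
            ps.executeUpdate();
        }
    }
}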

Talend File Tracker

Talend_File_Tracker registers all the transactions related to file processing. The transaction details include source file location, destination location, file name pattern, file name suffix, and the name of the last file processed.

  • Id: Unique identifier to identify a specific source file
  • jobID: Unique identifier to identify a specific job
  • job_name: The name of the Job
  • environment: The file server environment
  • file_name_pattern: The file name pattern
  • file_input_location: The source file location
  • file_destination_location: The target file location
  • file_suffix: The file suffix
  • latest_file_name: The name of the last file processed for a specific file
  • override_flag: The override flag to re-process the file with the same name
  • update_datetime: The last updated date
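To illustrate how the tracker can gate file processing, here is a minimal Java sketch that decides whether an incoming file should be processed, based on the latest_file_name and override_flag columns above. The connection, the helper class name, and the assumption that file names sort chronologically are all illustrative.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FileTrackerCheck {

    // Returns true if the file should be processed: it is newer than the last
    // file registered for this tracker entry, or the override flag is set.
    public static boolean shouldProcess(Connection meta, long trackerId, String fileName) throws Exception {
        try (PreparedStatement ps = meta.prepareStatement(
                "SELECT latest_file_name, override_flag FROM Talend_File_Tracker WHERE Id = ?")) {
            ps.setLong(1, trackerId);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return false; // unknown tracker entry, nothing to compare against
                }
                String latest = rs.getString("latest_file_name");
                boolean override = rs.getBoolean("override_flag");
                return override || latest == null || fileName.compareTo(latest) > 0;
            }
        }
    }
}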

Conclusion

This brings us to the end of implementing a job metadata framework using Talend. The key takeaways from this blog are:

  1. The need for and importance of a job metadata framework
  2. The data model to support the framework
  3. The customizability of the data model to support different types of job patterns

As always – let me know if you have any questions below and happy connecting!

The post How to Implement a Job Metadata Framework using Talend appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Giving Back: Chase The Music

CiviCRM - Tue, 10/09/2018 - 13:12

Chase the Music gives children battling critical conditions love, hope, strength and joy. We do this with original music - composed and performed just for them.

The impact of Chase the Music programs begins with the child. The children we write and perform for are going through situations that no child should have to face.

Categories: CRM

Joomla 3.8.13 Release

Joomla! - Tue, 10/09/2018 - 08:45

Joomla 3.8.13 is now available. This is a security release for the 3.x series of Joomla which addresses 5 security vulnerabilities.

Categories: CMS

Liferay Screens meets React Native, the sequel

Liferay - Mon, 10/08/2018 - 11:01

First of all, for those of you who don't know about Liferay Screens: Liferay Screens is a component library based on components called screenlets. A screenlet is a visual component that you insert into your native app to leverage Liferay's content and services, allowing you to create complex native applications for iOS and Android very quickly. Awesome, isn't it?

BUT, do you need to create the SAME application for iOS and Android, with the SAME features, twice? OK, with screenlets it does not take too much time, because most of the boring logic is encapsulated inside the screenlet and you only need to connect the dots. But it would be fantastic to have only one project and share the code between the two platforms.

How can we make this possible? Have you heard about React Native?

Goals

As you may know, React Native is a framework that allows you to create native applications (Android and iOS) in JavaScript using React. This avoids having to maintain two different codebases, one per platform. It's based on components, so the screenlet concept fits very well with React.

A long time ago, when React Native was released, we made a first proof of concept with some of the screenlets available at that time. Now we have come back to this idea and made another proof of concept. This one features all of our brand new, more complex screenlets and, yes, Android is supported too.

With this prototype we aim to provide a solution that makes mobile app development even faster with React Native, so you can use the screenlets the same way you would use any React Native component, like a Button. Great! Do you want to see how it works? Take a look at the next video; it shows you how to use our library in React Native.

As you can see in the video, using screenlets from React Native is very easy. You only have to instantiate the screenlet you want to use, give it a style with height and width (because otherwise the screenlet will not show), and, if appropriate, handle the events that the screenlet sends.

To handle an event, you specify a callback function that manages it. For example, in the LoginScreenlet you can handle the onLoginSuccess event, which fires when the user logs in correctly.

Of course, the attributes (known as props in React) depend on the screenlet you use, so some screenlets have required attributes; for example, the UserPortraitScreenlet needs the userId attribute.
To use all of this functionality in your React project, you have to configure the project following the steps in this video. You can also find a description of the main steps to configure your React Native project in the project's README.

What is the status of the project?

For now this is a prototype. Even so, ALL screenlets are available in React Native. In total, we have 21 screenlets on Android and 22 on iOS (the fileDisplayScreenlet is only available on iOS). To play with them, we recommend using the most common screenlets, like the ImageGalleryScreenlet, which shows an image gallery, the UserPortraitScreenlet, the CommentListScreenlet, which shows the list of comments on an asset, and, of course, the LoginScreenlet, but you can use whichever you want.
So you can explore and tinker with them. Here you have the project.

How does it work?

We don't want to bore you with technical details. Basically, we made a bridge: we built one side of the bridge in the native part and the other side in the React Native part, which allows the two sides to communicate and render the screenlets.

What now?

Well, now it depends on you. You have the project to play with. We are open to suggestions and feedback. Honestly, we are very happy with the result for now.

Thanks for reading.

Luismi.

Luis Miguel Barco 2018-10-08T16:01:00Z
Categories: CMS, ECM

6 tips for online shops on how to use an email marketing tool correctly

PrestaShop - Mon, 10/08/2018 - 04:39
Some online shops are of the opinion that email marketing tools mean too much effort and too little profit. Both claims can easily be refuted.
Categories: E-commerce

Listing out context variables 

Liferay - Mon, 10/08/2018 - 04:21
What's a Context Contributor?

While developing a theme, I was wondering: how can I know all the variables that I can access in my FreeMarker templates? Wouldn't it be great if I could write a utility that lists all the variables and objects that can be accessed from our theme or other template files? The Context Contributor concept can be used for this purpose. Using a context contributor, we can write a piece of code that injects contextual information which can be reused in various template files. If you are not aware of context contributors, please visit my article http://proliferay.com/context-contributor-concept-in-liferay-7dxp/.

How to create a context contributor?

Using the Liferay IDE we can create the project structure. 

 

The Code: 

Our context contributor class has to implement the TemplateContextContributor interface, and we need to implement the method below.

    public void prepare(Map<String, Object> contextObjects, HttpServletRequest request) {
        // The fun part is here
    }

Looking at the code above, the first parameter, contextObjects, is a map that contains all the contextual information as key-value pairs. We can iterate over all the entries of the map and write them to a file. Here is the complete code of the method. It writes a file named all-variables.html to my D drive; of course, you can change that to whatever you want.

// Imports needed by this method:
// import java.io.FileNotFoundException;
// import java.io.PrintWriter;
// import java.io.UnsupportedEncodingException;
// import java.util.Map;
// import javax.servlet.http.HttpServletRequest;

@Override
public void prepare(
    Map<String, Object> contextObjects, HttpServletRequest request) {

    PrintWriter writer;

    try {
        writer = new PrintWriter("D:\\all-variables.html", "UTF-8");

        // Build a simple HTML table listing every variable in the template context.
        StringBuffer stb = new StringBuffer();

        stb.append("<html>");
        stb.append("<body>");
        stb.append("<table border=\"1\">");
        stb.append("<tr>");
        stb.append("<th>Variable Name</th>");
        stb.append("<th>Variable Value</th>");
        stb.append("</tr>");

        // One row per context entry: key = variable name, value = variable object.
        for (Map.Entry<String, Object> entry : contextObjects.entrySet()) {
            stb.append("<tr>");
            stb.append("<td>" + entry.getKey() + "</td>");
            stb.append("<td>" + entry.getValue() + "</td>");
            stb.append("</tr>");
        }

        stb.append("</table>");
        stb.append("</body>");
        stb.append("</html>");

        writer.println(stb.toString());
        writer.close();
    }
    catch (FileNotFoundException | UnsupportedEncodingException e) {
        // The file could not be created or the encoding is not supported.
        e.printStackTrace();
    }
}
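For completeness, here is a sketch of how the class itself could be declared and registered as an OSGi component so that Liferay calls it for theme templates. The class name is made up for this example, and you could use TemplateContextContributor.TYPE_GLOBAL instead of TYPE_THEME if the variables should also be available to ADTs and other templates.

import java.util.Map;

import javax.servlet.http.HttpServletRequest;

import org.osgi.service.component.annotations.Component;

import com.liferay.portal.kernel.template.TemplateContextContributor;

@Component(
    immediate = true,
    property = {"type=" + TemplateContextContributor.TYPE_THEME},
    service = TemplateContextContributor.class
)
public class VariableListingTemplateContextContributor implements TemplateContextContributor {

    @Override
    public void prepare(Map<String, Object> contextObjects, HttpServletRequest request) {
        // The implementation shown above goes here.
    }
}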

Just deploy the module and access any page. This code will be executed and your file will be ready. Now you have all the contextual information, which can be used in theme development as well as when writing ADTs.

The output of the code:

 

Have fun... Hamidul Islam 2018-10-08T09:21:00Z
Categories: CMS, ECM

Contact Management online training for New CiviCRM Users - October 10th

CiviCRM - Sun, 10/07/2018 - 08:55

If you are new to CiviCRM, you should participate in this 2-hour online training taught by Cividesk on Wednesday, October 10th at 9 am MT/ 10 am CT/ 11 am ET.

The Fundamentals of Contact Management focuses on best practices for managing contacts and the relationships between them. We also cover searching, activities, emailing, merging duplicates, groups and tags, and creating an activity report. 

Categories: CRM

Why I'm Flying South to LSNA2018

Liferay - Sat, 10/06/2018 - 20:10
or, How to blow a Saturday evening writing a blog post just because  the kids don't want to hang out with you

 

Here are the 5 reasons I am flying down tomorrow evening.

I've been to LSNA twice before, in 2015 and 2016. I remember the energy and the ambience. Some of the topics deserve waaaay more than the 30 minutes or hour that is allocated to them, but then the presentations are designed to leave you with just enough to take a deep dive on whichever topic interests you, and on that front, they absolutely deliver. So, here's to more of that.

Unconference. I've never attended one of these before, but the prospect has me interested. I mean, it looks like the attendees get to carve out the day's agenda. I'll be bringing my list of topics fwiw. Something tells me there'll be enough knowledge sharing to go all around. 

Speed consulting. I hope I get to reserve a spot. I have a half-baked design approach around a SAML-authentication requirement using the SAML connector plugin - just a lot of holes that need plugging. Hoping a 20-minute session will help clear things up for me.

Agenda topics: As always, a great spread! Here are the top 5 items on my radar at this time:

  • Search (Oct 9, 10:35)
  • Securing your APIs for Headless Use (Oct 10, 11:00)
  • Extending Liferay with OSGI (Oct 9, 4:30)
  • Liferay Analytics Cloud (Oct 10, 10:20)
  • Building Amazing Themes (Oct 9, 3:50)

Food. I have on my to-do list to eat a bowl of authentic étouffée. I will have to seek out the best place for this.

Javeed Chida 2018-10-07T01:10:00Z
Categories: CMS, ECM

Member Only Event - CiviCRM Extension

CiviCRM - Fri, 10/05/2018 - 10:03
  • Do you want to restrict registration for certain events to logged-in members only, but still have other events open to the public? If the answer to the above question is yes, then this extension is for you. It allows you to set a flag on any event so that registration is restricted to those who have a current membership.
     

Categories: CRM

Call for establishment committee nominations

CiviCRM - Thu, 10/04/2018 - 19:22
Seeking community members to help shape CiviCRM's future. For the first time in its history, the diverse community of people who use, implement and improve CiviCRM is rallying around a major organized effort to increase the sustainability and transparency of the project. We are pleased to invite members of our community to take part in leading a two-month process that will define a community representation system for the CiviCRM open-source project. See below for information on how to get involved.
Categories: CRM

Cloudera 2.0: Cloudera and Hortonworks Merge to form a Big Data Super Power

Talend - Thu, 10/04/2018 - 18:39

We’ve all dreamed of going to bed one day and waking up the next with superpowers – stronger, faster and even perhaps with the ability to fly.  Yesterday that is exactly what happened to Tom Reilly and the people at Cloudera and Hortonworks.   On October 2nd they went to bed as two rivals vying for leadership in the big data space. In the morning they woke up as Cloudera 2.0, a $700M firm, with a clear leadership position.  “From the edge to AI”…to infinity and beyond!  The acquisition has made them bigger, stronger and faster. 

Like any good movie, however, the drama is just getting started, innovation in the cloud, big data, IoT and machine learning is simply exploding, transforming our world over and over, faster and faster.  And of course, there are strong villains, new emerging threats and a host of frenemies to navigate.

What’s in Store for Cloudera and Hortonworks 2.0

Overall, this is great news for customers, the Hadoop ecosystem and the future of the market. Both companies' customers can now sleep at night knowing that the pace of innovation from Cloudera 2.0 will continue and accelerate. Combining the Cloudera and Hortonworks technologies means that instead of having to pick one stack or the other, customers can now have the best of both worlds. The statement from their press release "From the Edge to AI" really sums up how the investments that Hortonworks made in IoT complement Cloudera's investments in machine learning. From an ecosystem and innovation perspective, we'll see fewer competing Apache projects with much stronger investments. This can only mean better experiences for any user of big data open source technologies.

At the same time, it’s no secret how much our world is changing with innovation coming in so many shapes and sizes.  This is the world that Cloudera 2.0 must navigate.  Today, winning in the cloud is quite simply a matter of survival.  That is just as true for the new Cloudera as it is for every single company in every industry in the world.  The difference is that Cloudera will be competing with a wide range of cloud-native companies both big and small that are experiencing explosive growth.  Carving out their place in this emerging world will be critical.

The company has so many of the right pieces, including connectivity, computing, and machine learning. Their challenge will be making all of it simple to adopt in the cloud while continuing to generate business outcomes. Today we are seeing strong growth from cloud data warehouses like Amazon Redshift, Snowflake, Azure SQL Data Warehouse, and Google BigQuery. Apache Spark and service players like Databricks and Qubole are also seeing strong growth. Cloudera now has decisions to make on how they approach this ecosystem, whom they choose to compete with, and whom they choose to complement.

What’s In Store for the Cloud Players

For the cloud platforms like AWS, Azure, and Google, this recent merger is also a win.  The better the cloud services are that run on their platforms, the more benefits joint customers will get and the more they will grow their usage of these cloud platforms.  There is obviously a question of who will win, for example, EMR, Databricks or Cloudera 2.0, but at the end of the day the major cloud players will win either way as more and more data, and more and more insight runs through the cloud.

Talend’s Take

From a Talend perspective, this recent move is great news. At Talend, we are helping our customers modernize their data stacks. Talend helps stitch together data, computing platforms, databases, and machine learning services to shorten the time to insight.

Ultimately, we are excited to partner with Cloudera to help customers around the world leverage this new union.  For our customers, this partnership means a greater level of alignment for product roadmaps and more tightly integrated products. Also, as the rate of innovation accelerates from Cloudera, our support for what we call “dynamic distributions” means that customers will be able to instantly adopt that innovation even without upgrading Talend.  For Talend, this type of acquisition also reinforces the value of having portable data integration pipelines that can be built for one technology stack and can then quickly move to other stacks.  For Talend and Cloudera 2.0 customers, this means that as they move to the future, unified Cloudera platform, it will be seamless for them to adopt the latest technology regardless of whether they were originally Cloudera or Hortonworks customers. 

You have to hand it to Tom Reilly and the teams at both Cloudera and Hortonworks.  They’ve given themselves a much stronger position to compete in the market at a time when people saw their positions in the market eroding.  It’s going to be really interesting to see what they do with the projected $125 million in annualized cost savings.  They will have a lot of dry powder to invest in or acquire innovation.  They are going to have a breadth in offerings, expertise and customer base that will allow them to do things that no one else in the market can do. 

The post Cloudera 2.0: Cloudera and Hortonworks Merge to form a Big Data Super Power appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Tips for enhancing your data lake strategy

SnapLogic - Thu, 10/04/2018 - 18:10

As organizations grapple with how to effectively manage ever voluminous and varied reservoirs of big data, data lakes are increasingly viewed as a smart approach. However, while the model can deliver the flexibility and scalability lacking in traditional enterprise data management architectures, data lakes also introduce a fresh set of integration and governance challenges that[...] Read the full article here.

The post Tips for enhancing your data lake strategy appeared first on SnapLogic.

Categories: ETL

Why Cloud-native is more than software just running on someone else’s computer

Talend - Thu, 10/04/2018 - 10:17

The cloud is not "just someone else's computer", even though that meme has been spreading fast on the internet. The cloud consists of extremely scalable data centers with highly optimized and automated processes. This makes a huge difference when you are talking about application software.

So what is “cloud-native” really?

“Cloud-native” is more than just a marketing slogan. And a “cloud-native application” is not simply a conventionally developed application which is running on “someone else’s computer”. It is designed especially for the cloud, for scalable data centers with automated processes.

Software that is really born in the cloud (i.e. cloud-native) automatically leads to a change in thinking and a paradigm shift on many levels. From the outset, cloud-native developed applications are designed with scalability in mind and are optimized with regard to maintainability and agility.

They are based on the “continuous delivery” approach and thus lead to continuously improving applications. The time from development to deployment is reduced considerably and often only takes a few hours or even minutes. This can only be achieved with test-driven developments and highly automated processes.

Rather than some sort of monolithic structure, applications are usually designed as a loosely connected system of comparatively simple components such as microservices. Agile methods are practically always deployed, and the DevOps approach is more or less essential. This, in turn, means that the demands made on developers increase, specifically requiring them to have well-founded “operations” knowledge.


Cloud-native = IT agility

With a "cloud-native" approach, organizations expect to gain more agility and especially more flexibility and speed. Applications can be delivered faster and continuously at high levels of quality; they are also better aligned to real needs, and their time to market is much shorter. In these times of "software is eating the world", where software is an essential factor of survival for almost all organizations, the significance of these advantages should not be underestimated.

In this context: the cloud certainly is not “just someone else’s computer”. And the “Talend Cloud” is more than just an installation from Talend that runs in the cloud. The Talend Cloud is cloud-native.

In order to achieve the highest levels of agility, in the end, it is just not possible to avoid changing over to the cloud. Potentially there could be a complete change in thinking in the direction of “serverless”, with the prospect of optimizing cost efficiency as well as agility.  As in all things enterprise technology, time will tell. But to be sure, cloud-native is an enabler on the rise.

About the author Dr. Gero Presser

Dr. Gero Presser is a co-founder and managing partner of Quinscape GmbH in Dortmund. Quinscape has positioned itself on the German market as a leading system integrator for the Talend, Jaspersoft/Spotfire, Kony and Intrexx platforms and, with their 100 members of staff, they take care of renowned customers including SMEs, large corporations and the public sector. 

Gero Presser did his doctorate in decision-making theory in the field of artificial intelligence and at Quinscape he is responsible for setting up the business field of Business Intelligence with a focus on analytics and integration.

The post Why Cloud-native is more than software just running on someone else’s computer appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

CiviCRM 5.6 release

CiviCRM - Thu, 10/04/2018 - 06:45
CiviCRM version 5.6 is now released and ready to download. RELEASE NOTES: Big thanks to Andrew Hunt from AGH Strategies for putting together the release notes for this version. The release notes for 5.6 can be accessed here. SPECIAL THANKS:
Categories: CRM

Get the most out of your inbox with Vtiger’s new 2-Way Gmail sync

VTiger - Thu, 10/04/2018 - 02:23
Although we live in the social media era, emails are still the primary communication channel for most organizations. So, it is only natural that a person’s productivity and performance directly depends on the ability to efficiently manage their inbox across several different email platforms. That is where Vtiger’s new 2-way sync feature with Gmail comes […]
Categories: CRM

Adding 2FA to Liferay DXP 7.1

Liferay - Wed, 10/03/2018 - 12:09

We recently had a requirement to add 2 Factor Authentication support for a demo, so I am pleased to share our implementation with the community.

 

Login

On login the user sees a new 'Authenticator Code' field below Password:

 

 

The user populates their credentials, launches Google Authenticator app (or other 2FA app) on their phone and gets their code:

 

 

The user enters it on screen, clicks Sign In and hey presto, they have logged in with 2FA.
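Behind that Sign In click, the check boils down to standard TOTP (RFC 6238) verification: the server derives a time-based code from the user's stored Secret Key and compares it with what was typed. The project linked at the end of this post contains the real implementation; the snippet below is only a minimal sketch of the idea, with a hypothetical class name and a one-step window either side to allow for clock drift.

import java.nio.ByteBuffer;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class TotpVerifier {

    private static final int TIME_STEP_SECONDS = 30;

    // Computes the 6-digit TOTP code for a given time-step counter.
    // secretKey is the raw (Base32-decoded) shared secret.
    static int totp(byte[] secretKey, long counter) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secretKey, "HmacSHA1"));
        byte[] hash = mac.doFinal(ByteBuffer.allocate(8).putLong(counter).array());

        // Dynamic truncation as defined by RFC 4226 / RFC 6238.
        int offset = hash[hash.length - 1] & 0x0F;
        int binary = ((hash[offset] & 0x7F) << 24)
            | ((hash[offset + 1] & 0xFF) << 16)
            | ((hash[offset + 2] & 0xFF) << 8)
            | (hash[offset + 3] & 0xFF);

        return binary % 1_000_000;
    }

    // Accepts the submitted code if it matches the current 30-second window
    // or the window immediately before or after it.
    public static boolean verify(byte[] secretKey, int submittedCode) throws Exception {
        long counter = System.currentTimeMillis() / 1000 / TIME_STEP_SECONDS;
        for (long c = counter - 1; c <= counter + 1; c++) {
            if (totp(secretKey, c) == submittedCode) {
                return true;
            }
        }
        return false;
    }
}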

 

User setup

QR Codes are used to share the profile details with the end user:

 

 

These are shared with the end user by email, and for convenience  (e.g. for Demos & testing) the QR Code is available through the Liferay profile screens (on the Password tab):

 

 

Rollout

To simplify rollout:

  • QR Codes used to configure the 2FA app. (Alternatively the user can manually configure the 2FA app.)
  • Users created after the full set of application modules are deployed will automatically be assigned a Secret Key on account creation and will be emailed a link to the QR Code.
  • There is an optional activator bundle that will generate Secret Keys and email QR Codes to all users.
  • Administrators can bypass 2FA and a custom User Group can be created to allow certain users to bypass 2FA if required.

 

Source & Documentation

The source is available here: https://github.com/michael-wall/liferay-2fa including a readme with deployment steps and more information on configuration, limitations (e.g. storing Secret Keys in plain text) etc.

Michael Wall 2018-10-03T17:09:00Z
Categories: CMS, ECM

Cardless Payment Solutions: The Good, The Bad, and What it Means For Your Business

PrestaShop - Wed, 10/03/2018 - 11:43
There’s a very real possibility that we’ll soon be able to walk into a shop and pay with our faces - no card, no phone, just our unique retina pattern.
Categories: E-commerce

Give your users customised automated tours of civicrm

CiviCRM - Tue, 10/02/2018 - 18:57

The Civi Summit was a great event - full of lots of nice surprises. One that stands out for me was that what started out as some wishful thinking - namely having the ability to provide on-page tours/tutorials - ended up with us being able to beta test a 'proof of concept' before we left.

While some amongst us (ahem) were sampling the whiskey and fine IPAs late in the evening along to the strumming of the musically-able folk, others remained focussed on their laptops - and in Coleman's case this meant getting us a working prototype of a tour/tutorial system for civi pages.

Categories: CRM