AdoptOS

Assistance with Open Source adoption

Open Source News

Unconference 2018

Liferay - Wed, 12/05/2018 - 05:16

// The English version of this article can be found here: Unconference 2018.

On November 6, the day before DevCon 2018 started, the Liferay Unconference took place at Pakhuis de Zwijger. I had read about this kind of session but had never taken part in one. Spoiler: I loved it.

If you have already taken part in one, you will know that these sessions have no agenda defined in advance; instead, the agenda is built during the session itself, by the attendees. If you haven't, I will explain a little about how it went so you can see how it works.

First, Olaf gave us an introduction in which he talked about the layout of the areas, the food, the organization and, of course, the four principles and the one law.

The four principles:

  • Whoever is in the meeting is the right person (they are showing their interest by attending).
  • Whatever is happening is the only thing that could be happening (focus on what is going on at that moment and in that place).
  • Whenever it starts is the right time (this refers to the start of the sessions: they can begin without waiting for anything in particular, such as everyone being present).
  • When it's over, it's over (this is also often stated in the negative form, "Until it's over, it's not over," meaning you should make the most of the available time).

The law:

  • The law of two feet: if you are not contributing or getting anything out of a session, you can leave (if at any point you feel you are neither learning nor contributing anything, use your two feet, that is, move to another meeting. The idea is that nobody should sit in a meeting they find boring).

One clarification that was made, and that struck me as very important for the mindset in the room, was: "if you propose a session and nobody shows up, you have two options: find another session that interests you, or use that hour, which has been reserved for the topic, to work on it yourself."

Once all the terms were clear, the agenda was built from the attendees' proposals.

And finally, before starting the sessions, some time was set aside to reorganize the agenda. An important note: only the person who proposed a session can change it, so if you want to move a session, for example because it overlaps with another one, you have to talk to the people who proposed it and see if they agree.

The agenda ended up as a grid of rooms and hourly slots from 10:00 to 16:00. Among the proposed sessions were:

  • Staging 2.0 - change list
  • Liferay in action at the university
  • REST/HATEOAS vs. GraphQL
  • SennaJS/Module loader with other JS libs
  • Liferay Analytics
  • What extension point are you missing?
  • APIO?
  • Liferay performance with more than 1 million users
  • Personalization - use cases & more
  • DDM, Forms & Workflow
  • Search optimisation: boosting & filtering
  • DXP Cloud
  • How do you monitor your Liferay application & plugins
  • Using multitenancy (more instances in one installation) - a good idea?
  • mongoDB & Liferay
  • Audience Targeting to guest user: how to do it?
  • "Internal classes" - Why are some classes in the "internal" package not exported? How to customize?
  • Container + Liferay: How to deploy or upgrade?
  • Liferay IntelliJ Plugin - Language Server Protocol
  • Administration experiences
  • GDPR
  • Migration experiences: LR 6.2 --> 7.0
  • Liferay as a headless CMS - best practices
  • Liferay + TensorFlow
  • Data integration, ETL/ESB | Experiences & methods integrating external systems
  • Mobile with Liferay
  • Workspace: tips & tricks | How you use/extend workspace? Use plugins?
  • Liferay Commerce
  • Config & content orchestration - site initializers
  • Your experience with Liferay + SSO
  • DXP vs. CE
  • React?
  • Media/Video - server user upload/integration
  • Documentation: What is missing?
  • Making it easier for business users to build sites (modern site building)
  • Liferay for frontend developers

Among the talks I was able to attend, some of the recommendations that came up were:

  • Migration experiences: LR 6.2 --> 7.0
    • If you are also migrating the database to an open source one, it is better to migrate the database first and the Liferay version afterwards.
  • Config & content orchestration – site initializers
    • There are several proposals, one of them created by the Commerce team.
    • Including it in upcoming versions is being evaluated.
  • Liferay + TensorFlow
    • It will be available in version 7.2, although it can already be downloaded from GitHub.
    • It will work at the asset level and is implemented for images.
  • How to monitor your Liferay applications and plugins
    • Watch the threads in the JVM (see the sketch after this list).
    • To analyze thread dumps, http://fastthread.io is very useful.
  • Documentation: What is missing?
    • Slack vs. forum – how to manage questions without losing information.
  • DXP vs. CE
    • Among other topics, there was a discussion about how to manage the two versions, and options such as keeping the code of both versions identical and paying the license for, for example, the support.
  • Workspace: tips & tricks | How you use/extend workspace? Use plugins?
    • When moving to 7, it is recommended to at least move the services to modules.
    • compile only - instruction call to proof
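
For the "watch the threads in the JVM" tip, here is a minimal sketch of how live threads can be listed from plain Java using the standard java.lang.management API (the class name and output format are just an illustration; in practice you would normally take a full thread dump with your JVM's tooling and upload it to fastthread.io):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadSnapshot {

    public static void main(String[] args) {
        // Ask the JVM for its thread MXBean and print basic info for every live thread.
        ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();

        for (ThreadInfo threadInfo : threadMXBean.dumpAllThreads(false, false)) {
            System.out.printf(
                "%s (id=%d) state=%s%n", threadInfo.getThreadName(),
                threadInfo.getThreadId(), threadInfo.getThreadState());
        }
    }

}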

These kinds of sessions are interesting both for those of us who want to learn about Liferay and for the Liferay folks themselves since, for example, several of the talks were proposed by them in order to get feedback on features that have already been built or that are currently being worked on.

To wrap up the Unconference, there was a closing session in which everyone could share their own experience.

The things I liked most, and that I think bring the most value:

  • Philosophy/mindset (the idea this kind of conference is based on)
  • Attitude (everyone comes to contribute, share and learn)
  • You can propose the topics that interest you
  • How easy it is to share and learn (even with experts in the field).

As I said at the beginning, it was a highly recommendable experience, and one I hope to repeat in the future.

 

Álvaro Saugar 2018-12-05T10:16:00Z
Categories: CMS, ECM

New learning material on Liferay University

Liferay - Tue, 12/04/2018 - 11:41

Have you checked out Liferay University by now? Or even better: Liferay Passport - the all-inclusive version of University? If you did, you might want to come back and check the new content. If you haven't... Why?

Since my last update, we've added two more free lessons and one full course.

And just as before - there's more to come. I could barely write this announcement because I was busy recording more material to be released soon.

Learn as much as you can

The offer that you can't refuse: Liferay University Passport, our flat rate for each and every course on Liferay University, is available at an introductory price of almost 30% off. It includes personal access to all material on Liferay University for a full year - learn as much as you can. The offer will expire at the end of the year - so sign up quickly, or regret it.

Prefer a live trainer in the room?

Of course, if you prefer to have a live trainer in the room: The regular trainings are still available, and are updated to contain all of the courses that you find on Liferay University and Passport. And, this way (with a trainer, on- or offline) you can book courses for all of the previous versions of Liferay as well.

And, of course, the fine documentation is still available and updated to contain information about the new version already.

(Photo: CC by 2.0 Hamza Butt)

Olaf Kock 2018-12-04T16:41:00Z
Categories: CMS, ECM

Just published extension Contact Specific API Defaults

CiviCRM - Tue, 12/04/2018 - 06:10

I have just published the CiviCRM native extension Contact Specific API Defaults, which allows you to specify default values for API fields for a specific contact.

The extension was funded by Mediwe and developed by CiviCooP based on this use case:

Categories: CRM

Membership Management online training: Wed. December 5th, 12 pm MT

CiviCRM - Mon, 12/03/2018 - 16:56

Cividesk is offering the Fundamentals of Membership Management training session for new users on Wednesday, December 5th starting at 11 am PT/ 12 pm MT/ 2 pm ET. This training class will help you get started using the membership module by learning how to configure membership types, create an online membership sign-up page, track current and expired members, create a membership report and much more.

Categories: CRM

Getting Started with Talend Open Studio: Run and Debug Your Jobs

Talend - Mon, 12/03/2018 - 13:39

In the past blogs, we have learned how to install Talend Open Studio, how to build a basic job loading data into Snowflake, and how to use a tMap component to build more complex jobs. In this blog, we will enable you with some helpful debugging techniques and provide additional resources that you can leverage as you continue to learn more about Talend Open Studio. 

As with our past blogs, you are welcome to follow along in our on-demand webinar. This blog corresponds with the last video of the webinar.

In this tutorial, we will quickly address how to successfully debug your Talend Jobs, should you run into errors. Talend classifies errors into two main categories: Compile errors and Runtime errors. A Compile error prevents your Java code from compiling properly (this usually includes syntax errors or Java class errors). A Runtime error prevents your job from completing successfully, resulting in the job failing during execution.

In the previous blog, we designed a Talend job to generate a Sales report to get data into a Snowflake cloud data warehouse environment. For the purposes of this blog, we have altered that job, so when we try to run it, we will see both types of errors.  In this way, we can illustrate how to resolve both types of errors.

Resolving Compile Errors in Talend Open Studio

Let’s look at a compile error. When we execute this job in Talend Studio, it will first attempt to compile; however, the compile will fail with the error below.

You can review the Java error details within the log, which states that the “quantity cannot be resolved or is not a field”. Conveniently, it also highlights the component the error is most closely associated with.

 To locate the specific source of the problem within the tMap component, you can either dive into the tMap and search yourself OR you can switch to the Code view. Although you cannot directly edit the code here, you can select the red box highlighted on the right of the scroll bar to bring you straight to the source of the issue.

In this case, the arithmetic operator is missing from the Unit Price and Quantity equation.

Next, head into the tMap component and make the correction to the Unit Price and Quantity equation by adding a multiplication operator (*) between Transactions.Unit_Price and Transactions.qty. Click Ok and now run the job again.
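
To make the fix concrete, the expression inside the tMap is plain Java; here is a minimal sketch using the column names from this example (the broken form is only an illustration of the missing operator):

// Broken: no operator between the two columns, so the generated Java does not compile.
Transactions.Unit_Price Transactions.qty

// Fixed: multiply unit price by quantity.
Transactions.Unit_Price * Transactions.qty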

And now you see the compile error has been resolved.

Resolving Runtime Errors in Talend Open Studio

Next, the job attempts to send the data out to Snowflake, and a runtime error occurs. Reading the log, it says, “JDBC driver not able to connect to Snowflake” and “Incorrect username or password was specified”.

To address this issue, we’ll head to the Snowflake component and review the credentials. It looks like the Snowflake password was incorrect, so re-enter the Snowflake password, and click run again to see if that resolved the issue.

And it did! This job has been successfully debugged and the customer data has been published to the Snowflake database.

Conclusion

This was the last of our planned blogs on getting started with Talend Open Studio, but there are other resources that you can access to improve your skills with Talend Open Studio. Here are some videos that we recommend you look at to strengthen and add on to the skills that you have gained from these past four blogs:

Joining Two Data Sources with the tMap Component – This tutorial will give you some extra practice using tMap to join your data complete with downloadable dummy data and PDF instructions.

Adding Condition-Based Filters Using the tMap Component – tMap is an incredibly powerful and versatile component with many uses, and in this tutorial, you will learn how to use tMap and its expression builder to filter data based on certain criteria.

Using Context Variables – Learn how to use context variables, which allow you to run the same job in different environments.

For immediate questions, be sure to visit our Community, and feel free to let us know what types of tutorials would be helpful for you.  

The post Getting Started with Talend Open Studio: Run and Debug Your Jobs appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Bitcoin and Blockchain: are you reaping the benefits for your business?

PrestaShop - Mon, 12/03/2018 - 08:42
Bitcoin has just celebrated its 10th birthday on October 31. Anniversary aside, it seems that there is no stopping the cryptocurrency movement.
Categories: E-commerce

Extended Security Release Update

CiviCRM - Sun, 12/02/2018 - 10:27

CiviCRM Users by Version
Categories: CRM

Accelerate the Move to Cloud Analytics with Talend, Snowflake and Cognizant

Talend - Fri, 11/30/2018 - 18:20

In the last few years, we’ve seen the concept of the “Cloud Data Lake” gain more traction in the enterprise. When done right, a data lake can provide the agility for Digital Transformation around customer experience by enabling access to historical and real-time data for analytics.

However, while the data lake is now a widely accepted concept both on-premises and in the cloud, organizations still have trouble making them usable and filling them with clean, reliable data. In fact, Gartner has predicted that through 2018, 90% of deployed data lakes will be useless.  This is largely due to the diverse and complex combinations of data sources and data models that are popping up more than ever before.            

Migrating enterprise analytics on-premises to the cloud requires significant effort before delivering value. Cognizant just accelerated your time to value with a new Data Lake Quickstart solution. In this blog, I want to show you how you can run analytics migration projects to the cloud significantly faster, deliver in weeks instead of months, with lower risk using this new Quickstart.

Cognizant Data Lake Quickstart with Talend on Snowflake

First, let’s start by going into detail on what this Quickstart solution is comprised of. The Cognizant Data Lake Quickstart Solution includes:

  • A data lake reference architecture based on:
    • Snowflake, the data warehouse built for the cloud
    • Talend Cloud platform
    • Amazon S3 and Amazon RDS
  • Data migration from on-premises data warehouses (Teradata/Exadata/Netezza) to Snowflake using metadata migration
  • Pre-built jobs for data ingestion and processing (pushdown to Snowflake and EMR)

Data Lake Reference Architecture

How It Works
  • Uses Talend to extract data files from on-premises (structured/semi-structured) and ingest into Amazon S3 using a metadata-based approach to store data quality rules and target layout
  • Stores data on Amazon S3 as an enterprise data lake for processing
  • Leverages the Talend Snowflake data loader to move files to Snowflake from Amazon S3
  • Runs Talend jobs that connect to Snowflake and process the data

Data Migration from On-premises Data Warehouse (Teradata/Exadata/Netezza) to Snowflake

For data migration projects, the metadata-based migration framework leverages Talend and Snowflake. Both source and target (Snowflake) metadata (Schema, tables, columns and datatype) are captured in the metadata repository using a Talend ETL process. The data migration is executed using Talend and Snowflake Copy utility.

Pre-built Jobs for Data ingestion and Processing

For incremental data loads, Cognizant has included pre-built Talend jobs that support data loads from source systems into the Amazon S3 layer, further into Snowflake Staging. These jobs then transform and load the data into Snowflake Presentation layer tables using Snowflake compatible SQL. Another option is to have pre-built jobs use the Amazon S3 layer to build a conformed layer in S3 using AWS EMR and Talend Spark components then later load the conformed data directly into Snowflake Presentation layer tables.

Conclusion

So, what are the benefits of this Quickstart architecture? Let’s review:

  • Cost optimization – Up to 50% reduction in initial setup effort to migrate to Snowflake
  • Simplification – Template based approach to facilitate Infrastructure setup and Talend jobs
  • Faster time to market – Deliver in weeks instead of months.
  • Agility – Changes to the migration mostly involve metadata changes only, with no code changes. A self-service mechanism lets you onboard new sources, configurations, environments, etc. just by providing metadata, with minimal Talend technical expertise. It is also easy to maintain, since all data migration configurations are kept in a single metadata repository.

Now go out and get your cloud data lake up and running quickly. Comment below and let me know what you think!

The post Accelerate the Move to Cloud Analytics with Talend, Snowflake and Cognizant appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

What the Healthcare Industry Can Teach Companies About Their Data Strategy

Talend - Fri, 11/30/2018 - 14:55

The information revolution – which holds the promise of a supercharged economy through the use of advanced analytics, data management technologies, the cloud, and knowledge – is affecting every industry. Digital transformation requires major IT modernization and the ability to shorten the time from data to insights in order to make the right business decisions. For companies, it means being able to efficiently process and analyze data from a variety of sources at scale. All this in the hope of streamlining operations, enhancing customer relationships, and providing new and improved products and services.

The healthcare and pharmaceutical industries are the perfect embodiment of what is at stake with the data revolution. Opportunities lie at all the steps of the health care value chain for those who succeed in their digital transformation:

  • Prevention: Predicting patients at risk for disease or readmission.
  • Diagnosis: Accurately diagnosing patient conditions, matching treatments with outcomes.
  • Treatment: Providing optimal and personalized health care through the meaningful use of health information.
  • Recovery and reimbursement: Reducing healthcare costs, fraud and avoidable healthcare system overuse. Providing support for reformed value-based incentives for cost effective patient care, effective use of Electronic Health Records (EHR), and other patient information.

Being able to unlock the relevance of healthcare data is the key to having a 360-view of the patient and, ultimately, delivering better care.

Data challenges in the age of connected care

But that’s easier said than done. The healthcare industry faces the same challenge as others: business insights are often missed due to the speed of change and the growing complexity of data users and needs. Healthcare organizations have to deal with massive amounts of data housed in a variety of data silos, such as information from insurance companies and patient records from multiple physicians and hospitals. To access this data and quickly analyze healthcare information, it is critical to break down the data silos.

Healthcare organizations are increasingly moving their data warehouse to a cloud-based solution and creating a single, unified platform for modern data integration and management across cloud and on-premise environments. Cloud-based integration solutions provide broad and robust connectivity, data quality, and governance tracking, simple pricing, data security and big data analysis capabilities.

Decision Resources Group (DRG) finds success in the cloud

Decision Resources Group (DRG) is a good example of the transformative power of the cloud for healthcare companies. DRG provides healthcare analytics, data and insight products and services to the world’s leading pharma, biotech and medical technology companies. To extend its competitive edge, DRG made the choice to build a cloud data warehouse to support the creation of its new Real-World Data Platform, a comprehensive claim and electronic health record repository that covers more than 90% of the US healthcare system. With this platform, DRG is tracking the patient journey, identifying influencers in healthcare decision making and segmenting data so that their customers have access to relevant timely data for decision making.

DRG determined that their IT infrastructure could not scale to handle the petabytes of data that needed to be processed and analyzed. They looked for solutions that contained a platform with a SQL engine that works with big data and could run on Amazon Web Services (AWS) in the cloud.

DRG selected data integration provider Talend and the Snowflake cloud data warehouse as the foundation of its new Real-World Data Platform. With an integration with Spark for advanced machine learning and Tableau for analysis, DRG gets scalable compute performance without complications allowing their developers to build data integration workflows without much coding involved. DRG now has the necessary infrastructure to accommodate and sustain massive growth in data assets and user groups over time and is able to perform big data analytics at the speed of cloud. This is the real competitive edge.

The right partner for IT modernization

DRG is not the only healthcare company that chose to modernize in the cloud for its enterprise information overhaul. AstraZeneca, the world’s seventh-largest pharmaceutical company, chose to build a cloud data lake with Talend and AWS for its digital transformation. This architecture enables them to scale up and scale down based on business needs.

Healthcare and pharmaceutical companies are at the forefront of a major transformation across all industries requiring the use of advanced analytics and big data technologies such as AI and machine learning to process and analyze data and provide insights. This digital transformation requires IT modernization, using hybrid or multi-cloud environments and providing a way to easily combine and analyze data from various sources and formats. Talend is the right partner for these healthcare companies, but also for any other company going through digital transformation.

Additional Resources: 

Read more about the DRG case study: https://www.talend.com/customers/drg-decision-resources-group/

Read more about the AstraZeneca case study: https://www.talend.com/customers/astrazeneca/

Talend Cloud: https://www.talend.com/products/integration-cloud/

The post What the Healthcare Industry Can Teach Companies About Their Data Strategy appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Meet Aleksandr, Ambassador of the month | November 2018

PrestaShop - Fri, 11/30/2018 - 11:39
This month meet Aleksandr, PrestaShop Ambassador in Talinn, Estonia!
Categories: E-commerce

Three reasons to move your on-premises data architecture to the cloud

SnapLogic - Thu, 11/29/2018 - 15:35

Most companies only use 5 to 10 percent of the data they collect. So estimates Beatriz Sanz Sai, a 20-year veteran in advanced analytics and the head of Ernst & Young’s global data and analytics practice. While it’s impossible to validate such a claim, the fact is many organizations are gathering lots of data but[...] Read the full article here.

The post Three reasons to move your on-premises data architecture to the cloud appeared first on SnapLogic.

Categories: ETL

Liferay Security Announcement: TLS v1.0

Liferay - Thu, 11/29/2018 - 01:00
Update: This has been moved to January 11, 2019.

Reason for the changes

The vulnerabilities in TLS 1.0 (and SSL protocols) include POODLE and DROWN. Due to these security risks, Liferay decided to disable TLS 1.0, as many other companies have done.

Moving to TLS 1.1 and higher will allow users to keep communications between Liferay and Liferay.com secure.

What TLS version Liferay systems are going to support

We will support TLS 1.1 and above.

Affected Liferay Services and Websites

Liferay Portal CE and Liferay DXP Functionality

  • Marketplace

Liferay DXP Functionality

  • Licensing (via order id, EE only)

Liferay Websites

  • api.liferay.com

  • cdn.lfrs.sl

  • community.liferay.com

  • customer.liferay.com

  • demo.liferay.com

  • dev.liferay.com

  • downloads.liferay.com

  • forms.liferay.com

  • learn.liferay.com

  • liferay.com

  • liferay.com.br

  • liferay.com.cn

  • liferay.de

  • liferay.es

  • liferay.org

  • marketplace.liferay.com

  • mp.liferay.com

  • origin.lfrs.sl

  • partner.liferay.com

  • services.liferay.com

  • support.liferay.com

  • translate.liferay.com

  • www.liferay.com

  • releases.liferay.com (tentative)

  • repository.liferay.com (tentative)

Deployment Impact

There are Liferay Portal CE/EE and Liferay DXP functionalities and applications that make outbound connections to remote servers (including Liferay services and websites). Server administrators should review their deployment configurations and adjust them (if needed) to enable initiating secure connections using a higher TLS protocol version and to prevent falling back to TLS 1.0.

Mitigation Notes for Deployments

Technical Information
  • On Java 8, the default client-side TLS version is TLS 1.2 (TLS 1.1 is also supported and enabled). Java 8 also introduced a new system property called jdk.tls.client.protocols to configure which protocols are enabled.

  • On Java 7, the default client-side TLS version is TLS 1.0, but TLS 1.1 and 1.2 are also supported, though must be enabled manually. As of Java 7u111, TLS 1.2 is also enabled by default, though this update is available for Oracle Subscribers only.

    • The system property, jdk.tls.client.protocols, is available as of Java 7u95 (for Oracle Subscribers only).

  • On Java 6, the default and only client-side TLS version is TLS 1.0. As of Java 6u111, TLS 1.1 is also supported, though this update is available for Oracle Subscribers only.

  • There is another Java system property available called https.protocols, which controls the protocol version used by Java clients in certain cases (see details on Oracle's blog: Diagnosing TLS, SSL, and HTTPS).

As a result of these, Liferay Portal CE and DXP deployments are affected differently.

Liferay Portal CE/DXP 7.0 and 7.1

Liferay Portal CE 7.0 and Liferay DXP 7.0 and above require Java 8, so these deployments have TLS 1.2 enabled by default, ensuring that outbound connections can use more secure protocol versions. To improve your server's security, Liferay recommends disabling TLS 1.0 for clients (outbound connections) using the system properties mentioned above.
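
As an illustration of the mechanism, here is a minimal sketch of how the client-side protocol list can be restricted with those system properties (the property values are examples; the usual place to set them is the application server's JVM arguments, e.g. -Djdk.tls.client.protocols=TLSv1.1,TLSv1.2, so they take effect before any outbound TLS connection is made):

public class TlsClientProtocols {

    public static void main(String[] args) {
        // Same effect as passing the properties as JVM arguments, provided this
        // runs before the first outbound HTTPS/TLS connection is created.
        System.setProperty("jdk.tls.client.protocols", "TLSv1.1,TLSv1.2");
        System.setProperty("https.protocols", "TLSv1.1,TLSv1.2");

        System.out.println(
            "Client TLS protocols: " +
                System.getProperty("jdk.tls.client.protocols"));
    }

}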

Liferay Portal CE/EE 6.1 and 6.2

Liferay Portal 6.2 CE/EE and 6.1 EE GA3 versions support Java 8, which has TLS 1.2 enabled by default. Liferay Portal CE 6.1 does not support Java 8.   Liferay recommends disabling TLS 1.0 for clients (outbound connections) using the system properties mentioned above.

Liferay Portal EE 6.1 and Liferay Portal CE/EE 6.2 deployments running on Java 7 should consider moving to Java 8. Liferay Portal 6.1 CE deployments should consider upgrading to a newer version with Java 8 support.  There is a known issue that prevents enabling TLS 1.1/1.2 manually using the system properties mentioned earlier.

Note for Deployments - Inbound Traffic

Liferay also recommends that server administrators disable support for TLS 1.0 and enable higher TLS protocols for inbound traffic on all Liferay Portal CE/EE and Liferay DXP deployments. The actual settings to enable and configure TLS can vary on each deployment, so system administrators should consult with their Application Server documentation and apply the necessary changes.

Related Resources

Jamie Sammons 2018-11-29T06:00:00Z
Categories: CMS, ECM

Talend and Red Hat OpenShift Integration: A Primer

Talend - Wed, 11/28/2018 - 14:41

One of the aspects I am always fascinated about Talend is its ability to run programs according to multiple job execution methodologies. Today I wanted to write an overview of a new way of executing data integration jobs using Talend and Red Hat OpenShift Platform.

First and foremost, let us do a quick recap of the standard ways of running Talend jobs. Users usually run Talend jobs using Talend schedulers, which can be either in the cloud or on-premise. Other methods include creating standalone jobs, building web services from Talend jobs, building an OSGi bundle for ESB and, the latest entry to this list from Talend 7.1 onwards, building the job as a Docker image. For this blog, we are going to focus on the Docker route and show you how Talend Data Integration jobs can be used with the Red Hat OpenShift Platform. 

I would also highly recommend reading two other interesting Talend blogs related to the interaction between Talend and Docker, which are:

  1. Going Serverless with Talend through CI/CD and Containers by Thibaut Gourdel
  2. Overview: Talend Server Applications with Docker by Michaël Gainhao 

Before going to other details, let’s get into the basics of containers, Docker and Red Hat OpenShift Platform. For all those are already proficient in container technology, I would recommend skipping ahead to the next section of the blog.

Containers, Docker, Kubernetes and Red Hat OpenShift

What is a container? A container is a standardized unit of software which is quite lightweight and can be executed without environment-related constraints. Docker is the most popular container platform, and it has helped the information technology industry on two major fronts: reducing infrastructure and maintenance costs and reducing the turnaround time to bring applications to market. 

The diagram above shows how the various levels of the Docker container platform and Talend jobs are stacked in application containers. The Docker platform interacts with the underlying infrastructure and host operating system, and it helps the application containers run in a seamless manner without knowing the complexities of the underlying layers.

Kubernetes

Next, let us quickly talk about Kubernetes and how it has helped in the growth of container technology. When we are building more and more containers, we will need an orchestrator that can control the management, automatic deployment and scaling of the containers, and Kubernetes is the software platform that does this orchestration in an almost magical way.

Kubernetes helps to coordinate a cluster of computers as a single unit, and we can deploy containerized applications on top of the cluster. It consists of Pods, which act as logical hosts for the containers, and these Pods run on top of worker machines in Kubernetes called Nodes. There are a lot of other concepts in Kubernetes, but let us limit ourselves to the context of this blog, since Talend job containers are executed on top of these Pods.

Red Hat OpenShift

OpenShift is the open source container application platform from Red Hat which is built on top of Docker containers and the Kubernetes container cluster manager. I am republishing the official OpenShift block diagram from the Red Hat website for your quick reference.

OpenShift comes in a few flavors apart from the free (Red Hat OpenShift Online Starter) version.

  1. Red Hat OpenShift Online Pro
  2. Red Hat OpenShift Dedicated
  3. Red Hat OpenShift Container Platform

OpenShift Online Pro and Dedicated run on top of Red Hat hosted infrastructure, while OpenShift Container Platform can be set up on top of a customer’s own infrastructure.

Now let’s move to our familiar territory where we are planning to convert the Talend job to Docker container.

Talend Job Conversion to Container and Image Registry Storage

Considering the customers who are using older versions of Talend, we will first create a Docker image from a sample Talend job. If you are already using Talend 7.1, you have the capability to export Talend jobs to Docker as mentioned in the introduction section, so you can safely move to the next section, where the Docker image is already available, and we will meet you there. For those who are still with me, let us quickly build a Docker image for a sample job.

Categories: ETL

Event Management online training - Friday, November 30th

CiviCRM - Tue, 11/27/2018 - 17:27

Learn the basics of customizing CiviEvent for your organization, the steps to create an online event, and how to manage and track event participants during this 2-hour online training session taught by Cividesk

Categories: CRM

Three ways API management transforms your organization

SnapLogic - Tue, 11/27/2018 - 15:50

In my previous blog post, “Future-proof your API lifecycle strategy,” I took a pretty nuts-and-bolts approach in explaining why companies are rethinking their application programming interface (API) lifecycle strategy for the future. Here I’ll take the discussion up a notch, to talk about three ways that a modern approach to API management can fundamentally change[...] Read the full article here.

The post Three ways API management transforms your organization appeared first on SnapLogic.

Categories: ETL

Getting Started with Talend Open Studio: Building a Complex tMap Job

Talend - Tue, 11/27/2018 - 15:00

In our previous blog, we walked through a simple job moving data from a CSV file into a Snowflake data warehouse.  In this blog, we will explore some of the more advanced features of the tMap component.

Similar to the last blog, you will be working with customer data in a CSV file and writing out to a Snowflake data warehouse; however, you will also be joining your customer CSV file with transaction data. As a result, you will need Talend Open Studio for Data Integration, two CSV data sources that you would like to join (in this example we use customer and transaction data sets), and a Snowflake warehouse for this tutorial. If you would like to follow a video-version of this tutorial, feel free to watch our on-demand webinar and skip to the fourth video.

First, we will join and transform customer data and transaction data.  As you join the customer data with transaction data, any customer data that does not find matching transactions will be pushed out to a tLogRow component (which will present the data in a Studio log following run time). The data that is successfully matched will be used to calculate top grossing customer sales before being pushed out into a Sales Report table within our Snowflake database.

Construct Your Job

Now, before beginning to work on this new job, make sure you have all the necessary metadata configurations in your Studio’s Repository.  As demonstrated in the previous blog (link to blog #2), you will need to import your Customer metadata, and you will need to use the same process to import your transaction metadata. In addition, you will need to import your Snowflake data warehouse as mentioned in the previous blog if you haven’t done so already.

So that you don’t have to start building a new job from scratch, you can take the original job that you created from the last blog (containing your customer data, tMap and Snowflake table) and duplicate it by right-clicking on the job and selecting Duplicate from the dropdown menu. Rename this new job – in this example we will be calling the new job “Generate_SalesReport”.

Now in the Repository you can open the duplicated job and begin adjusting it as needed. More specifically, you will need to delete the old Snowflake output component and the Customers table configuration within the tMap. 

Once that is done, you can start building out the new flow. 

Start building out your new job by first dragging and dropping your Transactions metadata definition from the Repository onto the Design Window as a tFileInputDelimited component, connecting this new component to the tMap as a lookup.  An important rule-of-thumb to keep in mind when working with the tMap component is that the first source connected to a tMap is the “Main” dataset.  Any dataset linked to the tMap after the “Main” dataset is considered a “Lookup” dataset.

At this point it is a good idea to rename the source connections to the tMap.  Naming connections will come in handy when it’s time to configure the tMap components. To rename connections, perform a slow double-click on the connection arrow. The name will become editable.  Name the “Main” connection (the Customer Dataset) “Customers” and the “Lookup” connection (the Transactions dataset) “Transactions”.  Later, we will come back to this tMap and configure it to perform a full inner join of customer and transaction data.  For now, we will continue to construct the rest of the job flow.

To continue building out the rest of the job flow, connect a tLogRow component as an output from the tMap (in the same way as discussed above, rename this connection Cust_NoTransactions). This tLogRow will capture customer records that have no matching transactions, allowing you to review non-matched customer data within the Studio log after you run your job. In a productionized job flow, this data would be more valuable within a database table, making it available for further analysis, but for simplicity of this discussion we will just write it out to a log.

The primary output of our tMap consists of customer data that successfully joins to transaction data. Once joined, this data will be collected using a tAggregateRow component to calculate total quantity and sales of items purchased. To add the tAggregateRow component to the design window, either search for it within the Component Pallet and then drag and drop it into the Design Window OR click directly in the design window and begin typing “tAggregateRow” to automatically locate and place it into your job flow. Now, connect your tAggregateRow to the tMap and name the connection “Cust_Transactions”.

Next, you will want to sort your joined, aggregated data, so add the tSortRow component.

In order to map the data to its final destination–your Snowflake target table—you will need one more tMap. To distinguish between the two tMap components and their intended purposes, make sure to rename this tMap to something like “Map to Snowflake”.

Finally, drag and drop your Snowflake Sales Report table from within the Repository to your Design window and ensure the Snowflake output is connected to your job. Name that connection “Snowflake” and click “Yes” to get the schema of the target component.

As a best practice, give your job a quick look over and ensure you’ve renamed any connections or components with clear and descriptive labels. With your job constructed, you can now configure your components.

Configuring Your Components

First, double-click to open the Join Data tMap component configuration. On the left, you can see two source tables, each identified by their connection name. To the right, there are two output tables: one for the customers not matched to any transactions and one for the joined data.

Start by joining your customers and transactions data. Click and hold ID from within the Customers table and drag and drop it onto ID from within the Transactions table. The default join type in a tMap component is a Left Outer Join.  But you will want to join only those customer Id’s that have matching transactions, so switch the Join Model to an “Inner Join”. 

Within this joined table, we want to include the customer ID in one column and the customers’ full names on a separate column. Since our data has first name and last name as two separate columns, we will need to join them, creating what is called a new “expression”. To do this, drag and drop both the “first_name” and “last_name” columns onto the same line within the table.  We will complete the expression in a bit.

Similarly, we want the Quantity column from the transaction data on its own line, but we also want to use it to complete a mathematical expression. By dragging and dropping Unit Price and Quantity onto the same line within the new table, we can do just that.

You can now take advantage of the “Expression Builder”, which gives you even more control of your data. It offers a list of defined pre-coded functions that you can apply directly to this expression—I highly recommend that you look through the Expression Builder to see what it can offer. And even better, if you know the Java code for your action, you can enter it manually. In this first case, we want to concatenate the first and last names. After adding the correct syntax within the expression builder, click Ok. 

You will want to use the Expression Builder again for your grouped transaction expression. With the Unit Price and Quantity expression, complete an arithmetic action to get the total transaction value by multiplying the Unit Price by the Quantity. Then, click Ok.
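
For reference, both expressions end up as small pieces of Java inside the tMap; here is a sketch using the column names from this example (adjust them to your own schema):

// Full name: concatenate first and last name with a space in between.
Customers.first_name + " " + Customers.last_name

// Transaction cost: multiply the unit price by the quantity sold.
Transactions.Unit_Price * Transactions.qty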

Remember, we set our Join Model to an Inner Join. However, Talend offers a nice way to capture just the customers who didn’t have transactions. To capture these “rejects” from an Inner Join, first drag and drop ALL the fields from the customers table to the Cust_NoTransactions output table. Then, select the tool icon at the top right of this table definition and switch the “Catch lookup inner join reject” to “true”.

With the fields properly mapped, it is time to move on and review the data below. Rename the first_name field to be simply “name” (since it now includes the last name) and rename the Unit Price column to “transaction cost” (since it now has the mathematical expression applied). Then, ensure no further adjustments are necessary to the table’s column types to avoid any mismatched type conflicts through the flow. 

With this tMap properly configured, click Ok. And then click “Yes” to propagate the changes.

Next, you will need to configure the Aggregate component. To do this, enter the Component Tab (below the Design Workspace) and edit the schema.

To properly configure the output schema of the tAggregateRow component, first choose the columns on the left that will be grouped. In this case we want to group by ID and Name, so select “id” and “name” and then click the yellow arrow button pointing to the right. Next, we want to create two new output columns to store our aggregated values. By clicking the green “+” button below the “Aggregate Sales (Output)” section you can add the desired number of output columns. First, create a new output column for the total quantity (“total_qty”) and identify it as an Integer type. And then create another for the total sales (“total_sales”) and set it as a double type. Next, click Ok, making sure to choose to propagate the changes.

With the output schema configured properly within the tAggregateRow component, we can now configure the Group By and Operations sections of the tAggregateRow component. To add your two Group By output columns and two Operations output columns, go back to the Component Tabs. Click the green plus sign below the Group By section twice and the Operations section twice to account for the output columns configured in the tAggregateRow schema. Then, in the Operations section, set the “total_qty” column’s general function as “sum” and identify the input column position as “qty”. This configures the tAggregateRow component to add all the quantities from the grouped customer IDs and output the total value in the “total_qty” column. Likewise, set the “total_sales” function as “sum” and input column position as “transaction_cost”.
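
As a plain-Java illustration of what this tAggregateRow configuration computes (group by customer id and name, sum the quantity into total_qty and the transaction cost into total_sales), independent of the Studio UI; the Row class and the sample values are assumptions made only for the sketch:

import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AggregateSketch {

    // Simplified stand-in for one joined customer/transaction record.
    static class Row {
        final int id;
        final String name;
        final int qty;
        final double transactionCost;

        Row(int id, String name, int qty, double transactionCost) {
            this.id = id;
            this.name = name;
            this.qty = qty;
            this.transactionCost = transactionCost;
        }
    }

    public static void main(String[] args) {
        List<Row> rows = Arrays.asList(
            new Row(1, "Alice Smith", 2, 19.98),
            new Row(1, "Alice Smith", 1, 5.49),
            new Row(2, "Bob Jones", 3, 30.00));

        // Group by (id, name) and sum qty -> total_qty, transaction cost -> total_sales.
        Map<String, double[]> totals = new LinkedHashMap<>();

        for (Row row : rows) {
            String key = row.id + " / " + row.name;
            double[] sums = totals.computeIfAbsent(key, k -> new double[2]);
            sums[0] += row.qty;
            sums[1] += row.transactionCost;
        }

        totals.forEach(
            (key, sums) -> System.out.printf(
                "%s: total_qty=%.0f, total_sales=%.2f%n", key, sums[0], sums[1]));
    }

}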

Next, head to the sorting component and configure it to sort by total sales to help us identify who our highest paying customers are. To do this, click on the green “+” sign in the Component Tab, select “total_sales” in the Schema Column, and select “num” to ensure that your data is sorted numerically. Last, choose “desc” so your data will be shown to you in descending order.

Now, configure your final tMap component, by matching the customer name, total quantity and total sales. Then click Ok and click Yes to propagate the changes.

Finally, make sure your tLogRow component is set to present your data in table format, making it easier for you to read the inner join reject data.

Running Your Job

At last, you are ready to run your job!

Thanks to the tLogRow component, within the log, you can see the six customers that were NOT matched with transaction data.

If you head to Snowflake, you can view your “sales_report” worksheet and review the top customers in order of highest quantity and sales.

And that’s how to create a job that joins different sources, captures rejects, and presents the data the way you want it. In our next blog, we will be going through running and debugging your jobs. As always, please comment and let us know if there are any other basic skills you would like us to cover in a tutorial.

The post Getting Started with Talend Open Studio: Building a Complex tMap Job appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Joomla 3.9.1 Release

Joomla! - Tue, 11/27/2018 - 09:45

Joomla 3.9.1 is now available. This is a bug fix release for the 3.x series of Joomla including over 40 bug fixes and improvements.

Categories: CMS

Overriding the Favicon in DXP

Liferay - Mon, 11/26/2018 - 13:33

So perusing the web, you'll notice there are a couple of resources out there explaining how to override the default favicon in Liferay. Of course, the standard way is to add it to the /images folder of your theme and apply the theme to the site. As long as you get your path right and use the default name, "favicon.ico," it should work just fine. 

While this satisfies most use cases, what if you have more strict business requirements? For example, "Company logo must be displayed at all times while on the site," which includes while navigating the control panel, which has its own theme. You might be inclined then to look into the Tomcat folder and simply replace the file directly for all the themes. While this would work, it is not very maintainable since you need to do this every time you deploy a new bundle (which can be very frequent depending on your build process) and for every instance of Liferay you have running.

With DXP, we've introduced another way to override certain JSPs through dynamic includes. Although not every JSP can be overridden this way, luckily for us, top_head.jsp (where the favicon is set) can be. There are a few things you need to do when creating your dynamic include for this. 

First thing is you're going to want to register which extension point you'll be using for the JSP:

/html/common/themes/top_head.jsp#post

You're going to use "post" and not "pre" because the latest favicon specified is what is going to be rendered on the page. In other words, by adding our favicon after the default one, ours will take precedence. 

Next, you're going to need a line that replaces the default:

<link href="<%= themeDisplay.getPathThemeImages() %>/<%= PropsValues.THEME_SHORTCUT_ICON %>" rel="icon" />

For that, we need to specify the image we're going to be using, which can be placed in our module's resource folder: src/main/resources/META-INF/resources/images. 

Next, we need a way of accessing our module's resources, which can be done with its web context path. In your bnd.bnd add the following line with your desired path name. For mine, I'll just use /favicon-override:

Web-ContextPath: /favicon-override

With that in place, we should be able to build our url now. The finished url will look something like: http://localhost:8080/o/favicon-override/images/favicon.ico. The components include the portal url, /o (which is used to access an OSGi module's resources), the web context path, and the path to the resource.

Putting it all together, your class should look something like this:

public class TopHeadDynamicInclude implements DynamicInclude {

    @Override
    public void include(
            HttpServletRequest request, HttpServletResponse response, String key)
        throws IOException {

        PrintWriter printWriter = response.getWriter();

        ThemeDisplay themeDisplay = (ThemeDisplay)request.getAttribute(
            WebKeys.THEME_DISPLAY);

        StringBundler url = new StringBundler(4);

        url.append(themeDisplay.getPortalURL());
        url.append("/o");
        url.append(_WEB_CONTEXT_PATH);
        url.append(_FAVICON_PATH);

        printWriter.println("<link href=\"" + url.toString() + "\" rel=\"icon\" />");
    }

    @Override
    public void register(DynamicIncludeRegistry dynamicIncludeRegistry) {
        dynamicIncludeRegistry.register("/html/common/themes/top_head.jsp#post");
    }

    private static final String _WEB_CONTEXT_PATH = "/favicon-override";

    private static final String _FAVICON_PATH = "/images/favicon.ico";

}

That's all there is to it! You could also add additional println statements to cover cases for mobile devices. The beauty of this solution is that you have a deployable module that can handle the configuration as opposed to manually doing it yourself which is prone to error. Also, by using dynamic includes, it's more maintainable when you need to upgrade to the next version of Liferay.

Github Repository: https://github.com/JoshuaStClair/favicon-override

Joshua St. Clair 2018-11-26T18:33:00Z
Categories: CMS, ECM

How about leveraging Liferay Forms by adding your own form field?

Liferay - Mon, 11/26/2018 - 09:22

Well, if you're reading this post, I can say you're interested, and maybe anxious, to find out how to create your own form field and deploy it to Liferay Forms, am I right? Therefore, keep reading and see how easy it is to complete this task.

The first step is to install blade-cli (by the way, what a nice tool! It boosts your Liferay development speed a lot!), then just type the following:

blade create -t form-field -v 7.1 [customFormFieldNameInCamelCase]

Nice! But what if I would like to create a custom form field in Liferay DXP 7.0? Is this possible?

For sure! Try the command below:

blade create -t form-field -v 7.0 [customFormFieldNameInCamelCase]

Since version 3.3.0.201811151753 of blade-cli, the developer can choose to name the form field module using hyphens as a word separator, like in custom-form-field, or keep using the Camel Case format. Just to let you know, Liferay's developers usually name their modules using hyphens as a word separator, as in the example below.  :)
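
For example, with a hypothetical module name, both of the following create the same kind of form field project:

blade create -t form-field -v 7.1 star-rating

blade create -t form-field -v 7.1 StarRating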

That's all Folks! Have a nice customization experience!

Renato Rêgo 2018-11-26T14:22:00Z
Categories: CMS, ECM

KNIME at AWS re:Invent Helping Machine Learning Builders

Knime - Thu, 11/22/2018 - 09:30

Amazon Web Services re:Invent is taking place on November 26-30, 2018 in Las Vegas, NV and KNIME will be there! This conference offers machine learning “Builder Sessions” and an “AWS Marketplace Machine Learning Hub” including KNIME Software. The goal of these hands-on sessions is to provide attendees with expert guidance on machine learning building and data science automation for predictive quality management, predicting risk, and predicting excess stock inventory models.

NEW: KNIME Publishes ML Models in AWS Marketplace for Machine Learning

The new AWS Marketplace for Machine Learning lists KNIME workflow models ready to deploy to Amazon SageMaker. The KNIME models provide AWS Marketplace customers self-service, on-demand deployment for faster execution. KNIME workflow models are deployed as Docker containers for automated Amazon SageMaker delivery.

The KNIME workflow models available in the AWS Marketplace for Machine Learning are:

  • Simple Income Predictor. This KNIME workflow accepts personal attributes and applies them to a prebuilt classification model. It returns a prediction of income range, including the probability of correctness.
  • Simple Chemistry Binding Predictor. This KNIME workflow predicts the binding of a chemical compound to the 5HT6 receptor on CHEMBL Ki data.
  • Basic Credit Score Predictor. If you want to determine creditworthiness, this KNIME workflow executes a classification model that returns the predicated creditworthiness and a probability score.
  • Basic Churn Predictor. This KNIME workflow reads in information about a customer and predicts whether or not the customer will churn, along with the probability of the prediction.

Amazon SageMaker is a fully managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale.

AWS, KNIME, and other solutions in the AWS Marketplace, as well as AWS Consulting Partners help accelerate data experimentation, deeper insights, and new levels of productivity with Machine Learning solutions.

More on this topic is also available in this AWS blog post.

New trial offers

To help machine learning builders and data scientists get started fast, KNIME has a new free trial-and-subscribe offering in the flexible and on-demand AWS Marketplace. It’s called KNIME Server Small for AWS. Customers can trial open source KNIME Analytics Platform and KNIME Server Small products via the AWS Marketplace to continue developing machine learning models and building their data science platform expertise. KNIME Software complements many Amazon AWS services such as SageMaker, RedShift, Athena, EMR, Kinesis, and Comprehend as integrated services to provide enterprises with data science automation for accelerating great decision making quickly, securely, and at scale.

By launching these new offerings, we aim to provide KNIME on AWS customers self service control over their data science, machine learning, and AI pipelines, drive expertise and technology reuse, and facilitate interactions among data scientists, developers, and the business units they serve. The KNIME Software’s graphical workflow editing and development tools - covering data prep and mining, ETL, machine learning, deep learning, and deployment, in combination with native integrations and extensions such as Tensorflow, Keras, H2O.ai, Tableau, Spark, Hadoop, Python, R, REST, and many others, bring an agile and open approach to data science pipelines to help drive intelligent decision making across the enterprise. KNIME on AWS covers a diverse set of use cases from drug discovery to call center efficiency, automotive benchmarking, customer journey orchestration, and many more.

Visit KNIME at AWS re:Invent and check out how KNIME is being used in the Machine Learning Builder Sessions. We’ll also be in The Quad running KNIME Software demonstrations.

Get started today by using KNIME on AWS through the AWS Marketplace and leverage one of our hundreds of example workflows, workflows/models available from partners via KNIME Workflow Hub, or build your own using our own guided workflow assistant. Trial KNIME Server Small or KNIME Analytics Platform on AWS free for thirty days.

 

News date Fri, 11/23/2018 - 15:00
Categories: BI