AdoptOS

Assistance with Open Source adoption

Open Source News

How about leveraging Liferay Forms by adding your own form field?

Liferay - Mon, 11/26/2018 - 09:22

Well, if you're reading this post, I can say you're interested, and maybe anxious, to find out how to create your own form field and deploy it to Liferay Forms, am I right? Keep reading and see how easy it is to complete this task. The first step is to install blade-cli (by the way, what a nice tool! It greatly boosts your Liferay development speed!), then just type the following:

    blade create -t form-field -v 7.1 [customFormFieldNameInCamelCase]

Nice! But what if I would like to create a custom form field in Liferay DXP 7.0, is that possible? For sure! Try the command below:

    blade create -t form-field -v 7.0 [customFormFieldNameInCamelCase]

Since version 3.3.0.201811151753 of blade-cli, the developer can choose to name the form field module using hyphens as a word separator, as in custom-form-field, or keep using the camel case format. Just so you know, Liferay's developers tend to name their modules using hyphens as word separators. :) That's all, folks! Have a nice customization experience! Renato Rêgo 2018-11-26T14:22:00Z

Categories: CMS, ECM

KNIME at AWS re:Invent Helping Machine Learning Builders

Knime - Thu, 11/22/2018 - 09:30

Amazon Web Services re:Invent is taking place on November 26-30, 2018 in Las Vegas, NV, and KNIME will be there! This conference offers machine learning “Builder Sessions” and an “AWS Marketplace Machine Learning Hub” that includes KNIME Software. The goal of these hands-on sessions is to provide attendees with expert guidance on building machine learning and data science automation for predictive quality management, risk prediction, and excess stock inventory prediction models.

NEW: KNIME Publishes ML Models in AWS Marketplace for Machine Learning

The new AWS Marketplace for Machine Learning lists KNIME workflow models ready to deploy to Amazon SageMaker. The KNIME models provide AWS Marketplace customers self-service, on-demand deployment for faster execution. KNIME workflow models are deployed as Docker containers for automated Amazon SageMaker delivery. The KNIME workflow models available in the AWS Marketplace for Machine Learning are:

  • Simple Income Predictor. This KNIME workflow accepts personal attributes and applies a prebuilt classification model to them. It returns a predicted income range along with a probability of correctness.
  • Simple Chemistry Binding Predictor. This KNIME workflow predicts the binding of a chemical compound to the 5HT6 receptor based on CHEMBL Ki data.
  • Basic Credit Score Predictor. If you want to determine creditworthiness, this KNIME workflow executes a classification model that returns the predicted creditworthiness and a probability score.
  • Basic Churn Predictor. This KNIME workflow reads in information about a customer and predicts whether or not the customer will churn, along with the probability of that prediction.
Amazon SageMaker is a fully managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. AWS, KNIME, and other solutions in the AWS Marketplace, as well as AWS Consulting Partners, help accelerate data experimentation, deliver deeper insights, and unlock new levels of productivity with machine learning solutions. More on this topic is also available in this AWS blog post.

New Trial Offers

To help machine learning builders and data scientists get started fast, KNIME has a new free trial-and-subscribe offering in the flexible and on-demand AWS Marketplace, called KNIME Server Small for AWS. Customers can trial the open source KNIME Analytics Platform and KNIME Server Small products via the AWS Marketplace to continue developing machine learning models and building their data science platform expertise. KNIME Software complements many Amazon AWS services such as SageMaker, Redshift, Athena, EMR, Kinesis, and Comprehend as integrated services, providing enterprises with data science automation for accelerating great decision making quickly, securely, and at scale.

By launching these new offerings, we aim to give KNIME on AWS customers self-service control over their data science, machine learning, and AI pipelines, drive expertise and technology reuse, and facilitate interactions among data scientists, developers, and the business units they serve. KNIME Software's graphical workflow editing and development tools - covering data prep and mining, ETL, machine learning, deep learning, and deployment - in combination with native integrations and extensions such as TensorFlow, Keras, H2O.ai, Tableau, Spark, Hadoop, Python, R, REST, and many others, bring an agile and open approach to data science pipelines to help drive intelligent decision making across the enterprise. KNIME on AWS covers a diverse set of use cases, from drug discovery to call center efficiency, automotive benchmarking, customer journey orchestration, and many more.

Visit KNIME at AWS re:Invent and check out how KNIME is being used in the Machine Learning Builder Sessions. We'll also be in The Quad running KNIME Software demonstrations. Get started today by using KNIME on AWS through the AWS Marketplace and leverage one of our hundreds of example workflows, workflows/models available from partners via the KNIME Workflow Hub, or build your own using our guided workflow assistant. Trial KNIME Server Small or KNIME Analytics Platform on AWS free for thirty days.
Categories: BI

Changing the Behavior of Scheduled Jobs

Liferay - Wed, 11/21/2018 - 22:27

Part of content targeting involves scheduled jobs that periodically sweep through several tables in order to remove older data. From a modeling perspective, this is as if content targeting were to make the assumption that all of those older events have a weight of zero, and therefore it does not need to store them or load them for modeling purposes.

If we wanted to evaluate whether this assumption is valid, we would ask questions like how much accuracy you lose by making that assumption. For example, is it similar to the small amount of accuracy you lose by identifying stop words for your content and removing them from a search index, or is it much more substantial? If you wanted to find out with an experiment, how would you design the A/B test to detect what you anticipate to be a very small effect size?

However, rather than look in detail at the assumption, today we're going to look at some problems with the assumption's implementation as a scheduled job.

Note: If you'd like to take a look at the proof of concept code being outlined in this post, you can check it out under example-content-targeting-override in my blog's code samples repository. The proof of concept has the following bundles:

  • com.liferay.portal.component.blacklist.internal.ComponentBlacklistConfiguration.config: a sample component blacklist configuration which disables the existing Liferay scheduled jobs for removing older data
  • override-scheduled-job: provides an interface ScheduledBulkOperation and a base class ScheduledBulkOperationMessageListener
  • override-scheduled-job-listener: a sample which consumes the configurations of the existing scheduled jobs to pass to ScheduledBulkOperationMessageListener
  • override-scheduled-job-dynamic-query: a sample implementation of ScheduledBulkOperation that provides the fix submitted for WCM-1490
  • override-scheduled-job-sql: a sample implementation of ScheduledBulkOperation that uses regular SQL to avoid one at a time deletes (assumes no model listeners exist on the audience targeting models)
  • override-scheduled-job-service-wrapper: a sample which consumes the ScheduledBulkOperation implementations in a service wrapper
Understanding the Problem

We have four OSGi components responsible for content targeting's scheduled jobs to remove older data.
  • com.liferay.content.targeting.analytics.internal.messaging.CheckEventsMessageListener
  • com.liferay.content.targeting.anonymous.users.internal.messaging.CheckAnonymousUsersMessageListener
  • com.liferay.content.targeting.internal.messaging.CheckAnonymousUserUserSegmentsMessageListener
  • com.liferay.content.targeting.rule.visited.internal.messaging.CheckEventsMessageListener
Each of the scheduled jobs makes a service call (which, by default, encapsulates the operation in a single transaction) to a total of five service builder services that perform the work for those scheduled jobs. Each of these service calls is implemented as an ActionableDynamicQuery in order to perform the deletion.
  • com.liferay.content.targeting.analytics.service.AnalyticsEventLocalService
  • com.liferay.content.targeting.anonymous.users.service.AnonymousUserLocalService
  • com.liferay.content.targeting.service.AnonymousUserUserSegmentLocalService
  • com.liferay.content.targeting.rule.visited.service.ContentVisitedLocalService
  • com.liferay.content.targeting.rule.visited.service.PageVisitedLocalService
These service builder services ultimately delete older data from six tables.
  • CT_Analytics_AnalyticsEvent
  • CT_Analytics_AnalyticsReferrer
  • CT_AU_AnonymousUser
  • CT_AnonymousUserUserSegment
  • CT_Visited_ContentVisited
  • CT_Visited_PageVisited
If you have enough older data in any of these tables, the large transaction used for the mass deletion will eventually overwhelm the database transaction log and cause the transaction to be rolled back (in other words, no data will be deleted). Because the rollback occurs due to having too much data, and none of this data was successfully deleted, this rollback will repeat with every execution of the scheduled job, ultimately resulting in a very costly attempt to delete a lot of data, with no data ever being successfully deleted. (Note: With WCM-1309, content targeting for Liferay 7.1 works around this problem by allowing the check to run more frequently, which theoretically prevents you from accumulating too much data in these tables, assuming you started using content targeting with Liferay 7.1 rather than in earlier releases.)

Implementing a Solution

When we convert our problem statement into something actionable, we can say that our goal is to update either the OSGi components or the service builder services (or both) so that the scheduled jobs which perform mass deletions do so across multiple smaller transactions. This will allow the transactions to succeed.

Step 0: Installing Dependencies

First, before we can even think about writing an implementation, we need to be able to compile that implementation. To do that, you'll need the API bundles (by convention, Liferay names these as .api bundles) for Audience Targeting.

    compileOnly group: "com.liferay.content-targeting", name: "com.liferay.content.targeting.analytics.api"
    compileOnly group: "com.liferay.content-targeting", name: "com.liferay.content.targeting.anonymous.users.api"
    compileOnly group: "com.liferay.content-targeting", name: "com.liferay.content.targeting.api"
    compileOnly group: "com.liferay.content-targeting", name: "com.liferay.content.targeting.rule.visited.api"

With that in mind, our first road block becomes apparent when we check repository.liferay.com for our dependencies: one of the API bundles (com.liferay.content.targeting.rule.visited.api) is not available, because it's considered part of the enterprise release rather than the community release.

To work around that problem, you will need to install all of the artifacts from the release .lpkg into a Maven repository and use those artifacts in your build scripts. This isn't fundamentally difficult to do, as one of my previous blog posts on Using Private Module Binaries as Dependencies describes. However, because Liferay Audience Targeting currently lives outside of the main Liferay repository, there are two wrinkles: (1) the modules in the Audience Targeting distribution don't provide the same hints in their manifests about whether they are available in a public repository or not, and (2) looking up the version each time is a pain.

To address both of these problems, I've augmented the script to ignore the manifest headers and to generate (and install) a Maven BOM from the .lpkg. You can find that augmented script here: lpkg2bom.
After putting it in the same folder as the .lpkg, you run it as follows:

    ./lpkg2bom com.liferay.content-targeting "Liferay Audience Targeting.lpkg"

Assuming you're using the Target Platform Gradle Plugin, you'd then add this to the dependencies section in your parent build.gradle:

    targetPlatformBoms group: "com.liferay.content-targeting", name: "liferay.audience.targeting.lpkg.bom", version: "2.1.2"

If you're using the Spring dependency management plugin, you'd add this to the imports section of the dependencyManagement section in your parent build.gradle:

    mavenBom "com.liferay.content-targeting:liferay.audience.targeting.lpkg.bom:2.1.2"

(Note: Rumor has it that we plan to merge Audience Targeting into the main Liferay repository as part of Liferay 7.2, so it's possible that the marketplace compile time dependencies situation isn't going to be applicable to Audience Targeting in the future. It's still up in the air whether it gets merged into the main public repository or the main private repository, so it's also possible that compiling customizations to existing Liferay plugins will continue to be difficult in the future.)

Step 1: Managing Dependency Frameworks

Knowing that we are dealing with service builder services, your initial plan might be to override the specific methods invoked by the scheduled jobs, because traditional Liferay wisdom is that the services are easy to customize in Liferay.
  • com.liferay.content.targeting.analytics.service.AnalyticsEventLocalService
  • com.liferay.content.targeting.anonymous.users.service.AnonymousUserLocalService
  • com.liferay.content.targeting.service.AnonymousUserUserSegmentLocalService
  • com.liferay.content.targeting.rule.visited.service.ContentVisitedLocalService
  • com.liferay.content.targeting.rule.visited.service.PageVisitedLocalService
If you attempt this, you will be blindsided by a really difficult part of the Liferay DXP learning curve: the intermixing of multiple dependency management approaches (Spring, Apache Felix Dependency Manager, Declarative Services, etc.), and how that leads to race conditions when dealing with code that runs at component activation. More succinctly, you will end up needing to find a way to control which happens first: your new override of the service builder service being consumed by the OSGi component firing the scheduled job, or the scheduled job firing for the first time.

Rather than try to solve the problem, you can work around it by disabling the existing scheduled job via a component blacklist (relying on its status as a static bundle, which means it has a lower start level than standard modules), and then starting a new scheduled job that consumes your custom implementation.

    blacklistComponentNames=["com.liferay.content.targeting.analytics.internal.messaging.CheckEventsMessageListener","com.liferay.content.targeting.anonymous.users.internal.messaging.CheckAnonymousUsersMessageListener","com.liferay.content.targeting.internal.messaging.CheckAnonymousUserUserSegmentsMessageListener","com.liferay.content.targeting.rule.visited.internal.messaging.CheckEventsMessageListener"]

Let's take a moment to reflect on this solution design. Given that overriding the service builder service brings us back into a world where we're dealing with multiple dependency management frameworks, it makes more sense to separate the implementation from service builder entirely. Namely, we want to move from a world that's a mixture of Spring and OSGi into a world that is pure OSGi.

Step 2: Setting up the New Scheduled Jobs

Like all scheduled jobs, each of these scheduled jobs will register itself with the scheduler, asking the scheduler to call it at some frequency.

    protected void registerScheduledJob(int interval, TimeUnit timeUnit) {
        SchedulerEntryImpl schedulerEntry = new SchedulerEntryImpl();

        String className = getClassName();

        schedulerEntry.setEventListenerClass(className);

        Trigger trigger = triggerFactory.createTrigger(
            className, className, null, null, interval, timeUnit);

        schedulerEntry.setTrigger(trigger);

        _log.fatal(
            String.format(
                "Registering scheduled job for %s with frequency %d %s",
                className, interval, timeUnit));

        schedulerEngineHelper.register(
            this, schedulerEntry, DestinationNames.SCHEDULER_DISPATCH);
    }

If you're familiar only with older versions of Liferay, it's important to note that we don't control the frequency of scheduled jobs via portal properties, but rather with the same steps that are outlined in the tutorial, Making Your Applications Configurable. In theory, this would make it easy for you to check configuration values; simply get an instance of the Configurable object, and away you go. However, in the case of Audience Targeting, Liferay chose to make the configuration class and the implementation class private to the module. This means that we'll need to parse the configuration directly from the properties rather than being able to use nice configuration objects, and we'll have to manually code in the default value that's listed in the annotation for the configuration interface class.
    protected void registerScheduledJob(
        Map<String, Object> properties, String intervalPropertyName,
        int defaultInterval, String timeUnitPropertyName) {

        int interval = GetterUtil.getInteger(
            properties.get(intervalPropertyName), defaultInterval);

        TimeUnit timeUnit = TimeUnit.DAY;

        String timeUnitString = GetterUtil.getString(
            properties.get(timeUnitPropertyName));

        if (!timeUnitString.isEmpty()) {
            timeUnit = TimeUnit.valueOf(timeUnitString);
        }

        registerScheduledJob(interval, timeUnit);
    }

With that boilerplate code out of the way, we assume that our listener will be provided with an implementation of the bulk deletion for our model. For simplicity, we'll call this implementation a ScheduledBulkOperation, which has a method to perform the bulk operation and a method that tells us how many entries it will attempt to delete at a time.

    public interface ScheduledBulkOperation {

        public void execute() throws PortalException, SQLException;

        public int getBatchSize();

    }

To differentiate between different model classes, we'll assume that the ScheduledBulkOperation has a property model.class that tells us which model it's intended to bulk delete. Then, each of the scheduled jobs asks for its specific ScheduledBulkOperation by specifying a target attribute on its @Reference annotation.

    @Override
    @Reference(
        target = "(model.class=abc.def.XYZ)"
    )
    protected void setScheduledBulkOperation(
        ScheduledBulkOperation scheduledBulkOperation) {

        super.setScheduledBulkOperation(scheduledBulkOperation);
    }

Step 3: Breaking Up ActionableDynamicQuery

There are a handful of bulk updates in Liferay that don't actually need to be implemented as large transactions, and so as part of LPS-45839, we added an (undocumented) feature that allows you to break a large transaction wrapped inside an ActionableDynamicQuery into multiple smaller transactions. This was further simplified with the refactoring for LPS-46123, so that you could use a pre-defined constant in DefaultActionableDynamicQuery and one method call to get that behavior:

    actionableDynamicQuery.setTransactionConfig(
        DefaultActionableDynamicQuery.REQUIRES_NEW_TRANSACTION_CONFIG);

You can probably guess that, as a result of the feature being undocumented, when we implemented the fix for WCM-1388 to use an ActionableDynamicQuery to fix an OutOfMemoryError, we didn't make use of it. So even though we addressed the memory issue, if the transaction was large enough, it was still doomed to be rolled back.

So now we'll look towards our first implementation of a ScheduledBulkOperation: simply taking the existing code that leverages an ActionableDynamicQuery and making it use a new transaction for each interval of deletions. For the most part, every implementation of our bulk deletion looks like the following, with a different service being called to get an ActionableDynamicQuery, a different name for the date column, and a different implementation of ActionableDynamicQuery.PerformAction for the individual delete methods.
    ActionableDynamicQuery actionableDynamicQuery =
        xyzLocalService.getActionableDynamicQuery();

    actionableDynamicQuery.setAddCriteriaMethod(
        (DynamicQuery dynamicQuery) -> {
            Property companyIdProperty = PropertyFactoryUtil.forName(
                "companyId");
            Property createDateProperty = PropertyFactoryUtil.forName(
                dateColumnName);

            dynamicQuery.add(companyIdProperty.eq(companyId));
            dynamicQuery.add(createDateProperty.lt(maxDate));
        });

    actionableDynamicQuery.setPerformActionMethod(xyzDeleteMethod);
    actionableDynamicQuery.setTransactionConfig(
        DefaultActionableDynamicQuery.REQUIRES_NEW_TRANSACTION_CONFIG);
    actionableDynamicQuery.setInterval(batchSize);

    actionableDynamicQuery.performActions();

With that base boilerplate code, we can implement a bulk deletion for each model that accounts for each of those differences.

    @Component(
        property = "model.class=abc.def.XYZ",
        service = ScheduledBulkOperation.class
    )
    public class XYZScheduledBulkOperationByActionableDynamicQuery
        extends ScheduledBulkOperationByActionableDynamicQuery<XYZ> {
    }

Step 4: Bypassing the Persistence Layer

If you've worked with Liferay service builder, you know that almost all non-upgrade code that lives in Liferay's code base operates on entities one at a time. Naturally, anything implemented with ActionableDynamicQuery has the same limitation. This happens partly because there are no foreign keys (I don't know why this is the case either), partly because of an old incompatibility between WebLogic and Hibernate 3 (which was later addressed through a combination of LPS-29145 and LPS-41524, though the legacy setting lives on in hibernate.query.factory_class), and partly because we still notify model listeners one at a time.

In theory, you can set the value of the legacy property to org.hibernate.hql.ast.ASTQueryTranslatorFactory to allow for Hibernate queries with bulk updates (among a lot of other nice features that are available in Hibernate by default, but not available in Liferay due to the default value of the portal property), and then use that approach instead of an ActionableDynamicQuery. That's what we're hoping to eventually be able to do with LPS-86407.

However, if you know you don't have model listeners on the models you are working with (not always a safe assumption), a new option becomes available. You can choose to write everything with plain SQL outside of the persistence layer and not have to pay the Hibernate ORM cost, because nothing needs to know about the model. This brings us to our second implementation of a ScheduledBulkOperation: using plain SQL.

With the exception of the deletions for CT_Analytics_AnalyticsReferrer (which is effectively a cascade delete, emulated with code), the mass deletion for each of the other five tables can be thought of as having the following form:

    DELETE FROM CT_TableName WHERE companyId = ? AND dateColumnName < ?

Whether you delete in large batches or in small batches, the query is the same. So let's assume that something provides us with a map where the key is a companyId, and the value is a sorted set of the timestamps you will use for the deletions, where the timestamps are pre-divided into the needed batch size.

    String deleteSQL = String.format(
        "DELETE FROM %s WHERE companyId = ? AND %s < ?", getTableName(),
        getDateColumnName());

    try (Connection connection = dataSource.getConnection();
        PreparedStatement ps = connection.prepareStatement(deleteSQL)) {

        connection.setAutoCommit(true);

        for (Map.Entry<Long, SortedSet<Timestamp>> entry :
                timestampsMap.entrySet()) {

            long companyId = entry.getKey();
            SortedSet<Timestamp> timestamps = entry.getValue();

            ps.setLong(1, companyId);

            for (Timestamp timestamp : timestamps) {
                ps.setTimestamp(2, timestamp);

                ps.executeUpdate();

                clearCache(getTableName());
            }
        }
    }

So all that's left is to identify the breakpoints. In order to delete in small batches, choose the number of records that you want to delete in each batch (for example, 10000). Then, assuming you're running on a database other than MySQL 5.x, you can fetch the different breakpoints for those batches, though the modulus function will vary from database to database.

    SELECT companyId, dateColumnName FROM (
        SELECT companyId, dateColumnName,
            ROW_NUMBER() OVER (PARTITION BY companyId ORDER BY dateColumnName) AS rowNumber
        FROM CT_TableName WHERE dateColumnName < ?
    )
    WHERE MOD(rowNumber, 10000) = 0
    ORDER BY companyId, dateColumnName

If you're running a database like MySQL 5.x, you will need something similar to a stored procedure, or you can pull back all of the companyId, dateColumnName records and discard anything that isn't located at a breakpoint. It's wasteful, but it's not that bad. Finally, you sequentially execute the mass delete query for each of the different breakpoint values (and treat the original value as one extra breakpoint) rather than just the final value by itself. With that, you've effectively broken up one transaction into multiple transactions, and it will happen as fast as the database can manage, without having to pay the ORM penalty.

Expanding the Solution

Now suppose you encounter the argument, "What happens if someone manually calls the method outside of the scheduled job in order to clean up the older data?" At that point, overriding the service sounds like a good idea. Since we already have a ScheduledBulkOperation implementation, and because service wrappers are implemented as OSGi components, the implementation is trivial.

    @Component(service = ServiceWrapper.class)
    public class CustomXYZEventLocalService extends XYZLocalServiceWrapper {

        public CustomXYZEventLocalService() {
            super(null);
        }

        @Override
        public void checkXYZ() throws PortalException {
            try {
                _scheduledBulkOperation.execute();
            }
            catch (SQLException sqle) {
                throw new PortalException(sqle);
            }
        }

        @Reference(
            target = "(model.class=abc.def.XYZ)"
        )
        private ScheduledBulkOperation _scheduledBulkOperation;

    }

Over-Expanding the Solution

With the code now living in a service override, we have the following question: should we move the logic to whatever we use to override the service and then have the scheduled job consume the service rather than this extra ScheduledBulkOperation implementation? And if so, should we just leave the original scheduled job enabled?

With the above solution already put together, it's not obvious why you would ask that question. After all, if you have the choice not to mix Spring and OSGi, why are you choosing to mix them together again? However, if you didn't declare the scheduled bulk update operation as its own component, and you had originally just embedded the logic inside of the listener, this question is perfectly natural to ask when you're refactoring for code reuse.
Do you move the code to the service builder override, or do you create something else that both the scheduled job and the service consume? And it's not entirely obvious that you should almost never attempt the former.

Evaluating Service Wrappers

In order to know whether it's possible to consume our new service builder override from a scheduled job, you'll need to know the order of events for how a service wrapper is registered:
  1. OSGi picks up your component, which declares that it provides the ServiceWrapper.class service
  2. The ServiceTracker within ServiceWrapperRegistry is notified that your component exists
  3. The ServiceTracker creates a ServiceBag, passing your service wrapper as an argument
  4. The ServiceBag injects your service wrapper implementation into the Spring proxy object
Notice that when you follow the service wrapper tutorial, your service is not registered to OSGi under the interface it implements, because Liferay relies on the Spring proxy (not the original implementation) being published as an OSGi component. This is deliberate, because Liferay hasn't yet implemented a way to proxy OSGi components (though rumor has it that this is being planned for Liferay 7.2), and without that, you lose all of the benefits of the advices that are attached to services. As a side effect, even though no components are notified of a new implementation of the service, all components are transparently updated to use your new service wrapper once the switch completes.

What about your scheduled job? Well, until your service wrapper is injected into the Spring proxy, your scheduled job will still be calling the original service. In other words, we're back to having a race condition between all of the dependency management frameworks. In order to fight against that race condition, you might consider manually registering the scheduled job after a delay, or duplicating the logic that exists in ServiceWrapperRegistry and ServiceBag and polling the proxy to find out when your service wrapper was registered. However, all of that is really just hiding dependencies.

Evaluating Bundle Fragments

If you were still convinced that you could override the service and have your scheduled job invoke the service, you might consider overriding the existing service builder bean using a fragment bundle and ext-spring.xml, as described in a past blog entry by David Nebinger on OSGi Fragment Bundles. However, there are two key limitations of this approach.
  1. You need a separate bundle fragment for each of the four bundles.
  2. A bundle fragment can't register new OSGi components through Declarative Services.
The second limitation warrants additional discussion, because it's also another part of the OSGi learning curve. Namely, code that would work perfectly in a regular bundle will stop working if you move it to a bundle fragment, because a bundle fragment is never started (it only advances to the RESOLVED state). Since it's a service builder plugin, one workaround for DXP is to use a Spring bean, where the Spring bean will get registered to OSGi automatically later in the initialization cycle. However, choosing this strategy means you shouldn't add @Component to your scheduled job class (otherwise, it gets instantiated by both Spring and OSGi, and that will get messy), and there are a few things you need to keep in mind as you're trying to manage the fact that you're playing with two dependency management frameworks.
  • In order to get references to other Spring-managed components within the same bundle (for example, the service builder service you overrode), you should do it with ext-spring
  • In order to get references to Spring beans, you should use @BeanReference
  • In order to get references to OSGi components, you need to either (a) use static fields and ServiceProxyFactory, as briefly mentioned in the tutorial on Detecting Unresolved OSGi Components, or (b) use the Liferay registry API exported to the global classloader, as mentioned in the tutorial on Using OSGi Services from EXT Plugins (a sketch of option (a) follows this list)
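To illustrate option (a), here is a rough sketch of a Spring-managed listener that pulls an OSGi-published service through a static ServiceProxyFactory field. This is not code from the proof of concept: the class name is made up, the tracked service (the ScheduledBulkOperation from earlier) is chosen purely for illustration, and the newServiceTrackedInstance signature is an assumption based on portal-kernel usage, so double-check it against your target platform.

    import com.liferay.portal.kernel.messaging.BaseMessageListener;
    import com.liferay.portal.kernel.messaging.Message;
    import com.liferay.portal.kernel.util.ServiceProxyFactory;

    // Declared in ext-spring.xml rather than annotated with @Component, so only
    // Spring instantiates it. OSGi services therefore cannot arrive through
    // @Reference; the static ServiceProxyFactory field below hands back a proxy
    // that blocks callers until the real service is registered.
    public class SpringManagedCheckEventsMessageListener
        extends BaseMessageListener {

        @Override
        protected void doReceive(Message message) throws Exception {

            // Delegate the sweep to whatever implementation is currently
            // registered under the ScheduledBulkOperation interface.
            _scheduledBulkOperation.execute();
        }

        private static volatile ScheduledBulkOperation _scheduledBulkOperation =
            ServiceProxyFactory.newServiceTrackedInstance(
                ScheduledBulkOperation.class,
                SpringManagedCheckEventsMessageListener.class,
                "_scheduledBulkOperation", true);

    }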
Evaluating Marketplace Overrides

Of course, if you were to inject the new service using ext-spring.xml via a marketplace override, as described in a past blog entry by David Nebinger on Extending Liferay OSGi Modules, you're able to register new components just fine. However, there are still three key limitations of this approach.
  1. You need a separate bundle for each of the four marketplace overrides.
  2. Each code change requires a full server restart and a clean osgi/state folder.
  3. You need to be fully aware that the increased flexibility of a marketplace override is similar to the increased flexibility of an EXT plugin.
In theory, the reason the second limitation exists is that marketplace overrides are scanned by the same code that scans .lpkg folders rather than through a regular bundle scanning mechanism, and that scan happens only once, during module framework initialization. You might be able to work around it by adding the osgi/marketplace/override folder to the module.framework.auto.deploy.dirs portal property. However, I don't know how this actually works in practice, because I've quietly accepted the documentation that says that restarts are necessary.

Reviewing the Solution

To summarize, overriding Liferay scheduled jobs is fairly straightforward once you have all of the dependencies you need, assuming you're willing to accept the following two steps:
  1. Disable the existing scheduled job
  2. Create a new implementation of the work that scheduled job performs (a minimal end-to-end sketch of such an implementation follows this list)
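To make the second step concrete, here is a minimal end-to-end sketch of what a plain-SQL ScheduledBulkOperation component might look like for one of the six tables, combining Step 2's component registration with Step 4's breakpoint idea. This is not code from the proof-of-concept repository: the model.class value, retention window, and batch size are illustrative, the per-company grouping and cache clearing from Step 4 are omitted for brevity, and DataAccess is used simply as one way to obtain a JDBC connection to the portal database.

    import com.liferay.portal.kernel.dao.jdbc.DataAccess;

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Timestamp;

    import java.util.ArrayList;
    import java.util.List;

    import org.osgi.service.component.annotations.Component;

    @Component(
        property = "model.class=com.liferay.content.targeting.analytics.model.AnalyticsEvent",
        service = ScheduledBulkOperation.class
    )
    public class AnalyticsEventScheduledBulkOperationBySQL
        implements ScheduledBulkOperation {

        @Override
        public void execute() throws SQLException {

            // Illustrative retention window: everything older than 30 days.
            Timestamp maxDate = new Timestamp(
                System.currentTimeMillis() - (30L * 24 * 60 * 60 * 1000));

            try (Connection connection = DataAccess.getConnection()) {

                // Each executeUpdate() below commits on its own, so no single
                // oversized transaction ever hits the transaction log.
                connection.setAutoCommit(true);

                List<Timestamp> breakpoints = new ArrayList<>();

                // Breakpoints mark every Nth row below the cutoff, mirroring
                // the ROW_NUMBER() query from Step 4 (syntax varies by
                // database, and the per-company partition is omitted here).
                try (PreparedStatement ps = connection.prepareStatement(
                        "SELECT createDate FROM (SELECT createDate, " +
                            "ROW_NUMBER() OVER (ORDER BY createDate) AS " +
                                "rowNumber FROM CT_Analytics_AnalyticsEvent " +
                                    "WHERE createDate < ?) breakpoints " +
                                        "WHERE MOD(rowNumber, ?) = 0 " +
                                            "ORDER BY createDate")) {

                    ps.setTimestamp(1, maxDate);
                    ps.setInt(2, getBatchSize());

                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            breakpoints.add(rs.getTimestamp(1));
                        }
                    }
                }

                // Treat the original cutoff as the final breakpoint, so the
                // last delete removes whatever remains below maxDate.
                breakpoints.add(maxDate);

                try (PreparedStatement ps = connection.prepareStatement(
                        "DELETE FROM CT_Analytics_AnalyticsEvent WHERE " +
                            "createDate < ?")) {

                    for (Timestamp breakpoint : breakpoints) {
                        ps.setTimestamp(1, breakpoint);

                        ps.executeUpdate();
                    }
                }
            }
        }

        @Override
        public int getBatchSize() {
            return 10000;
        }

    }

A component like this would then be picked up by the Step 2 message listener through its (model.class=...) target filter, or by the service wrapper shown in Expanding the Solution.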
If you reject these steps and try to play at the boundary of where different dependency management frameworks interact, you need to deal with the race conditions and complications that arise from that interaction. Minhchau Dang 2018-11-22T03:27:00Z
Categories: CMS, ECM

Web Summit 2018 Recap: From data to insights

SnapLogic - Wed, 11/21/2018 - 16:07

Earlier this month, SnapLogic CEO Gaurav Dhillon joined fellow CEOs Zander Lurie of SurveyMonkey and Jager McConnell of Crunchbase at the annual Web Summit conference in Lisbon, Portugal for a lively panel discussion on the opportunities and challenges around data. The session, “Big Data to Big Insights,” was moderated by Intellyx founder and Forbes contributor[...] Read the full article here. The post Web Summit 2018 Recap: From data to insights appeared first on SnapLogic.

Categories: ETL

5 Recipes for Not Becoming the Data Turkey of Your Organization

Talend - Wed, 11/21/2018 - 09:24

With Thanksgiving around the corner, it’s a perfect moment to take a step back and get some recipes to be data savvy within your organization. Fortunately, Talend experts have a recipe for data success that will help you to stay above the fray.

As companies become more data driven, being ahead of the curve will obviously be considered as a sign of curiosity and a way to differentiate. This is also a means for your company to anticipate incoming trends and thrive in a changing world where data has become the subject of concern and heavy regulations.

Follow these simple recipes to anticipate trends, follow regulations or better manage your data.

 

Recipe #1: Learn more about the Data Kitchen and how to be GDPR Compliant

Recent news is here to remind us that not meeting data compliance standards can be damaging for any type of organization. As gdprtoday stated, data complaints appear to be widespread, and it won't stop here. To better understand GDPR, avoid penalties, and build proper data governance, follow the guidance of our GDPR whitepaper, which explains how to regain control of your data and get ready for data protection regulations.

Recipe #2: Open the fridge and discover your Data

While taking GDPR into consideration, it is also the right time to identify how to get more value from the data you do have. To do that, you first need to understand what's inside your data sources and assess it. Data profiling is the process of examining the data available in different data sources and collecting statistics and information about this data. It helps to assess the quality level of the data against defined goals. If data is of poor quality, or managed in structures that cannot be integrated to meet the needs of the enterprise, business processes and decision-making suffer. The best advice would be to read this post that explains the principles of data profiling. If you're a data engineer, also follow this introduction to Talend Open Studio for Data Quality.

Recipe #3: Engage your guests, cook and enrich data together

You alone will have a hard time solving all your data quality problems. It is far better to treat data as an organizational priority rather than a sole IT responsibility. Managing data quality beyond IT involves different roles in your organization to make your data strategy an enterprise-wide success. This webinar will explain the very first steps of collaborative data management. And if you don't want to fail, this post will provide you with some good recommendations.

Recipe #4: Set the table and let the trust flow freely

Once your data is cleaned, you need to provide your teams with a way to share and crawl datasets easily. Follow this webinar about creating a single point of trust with the newly announced Talend Data Catalog. You'll learn why a data catalog would benefit your entire company and how to take advantage of it.

Recipe #5: Don't cook solo. Learn from experienced cooks.

You may look for customer references or good recipes from companies in your industry. Don't hesitate to download this guide to see how companies fight their data integrity challenges with modern Talend tools.

Want a dessert? Why not enjoy a good pecan pie with this thought leadership IDC whitepaper about intelligent governance? And if you're still hungry, don't hesitate to download our Definitive Guide to Data Quality. Happy Thanksgiving! The post 5 Recipes for Not Becoming the Data Turkey of Your Organization appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

KNIME Fall Summit 2018, Austin: Summary

Knime - Wed, 11/21/2018 - 09:00

From November 6 - 9, 2018 we held our 3rd KNIME Fall Summit in the US and, like last year, we were in Austin, Texas. Over 200 people joined us and it was great seeing so many new faces amongst the regular Summit attendees - thank you to everyone who joined us!

Michael Berthold opened the Summit by reflecting on another year of significant growth, including the half-dozen new KNIMErs who have joined the US team in 2018. He also took a moment to thank the KNIME community for their ongoing contributions, which helped us keep our position as a leader in analyst reports. Lastly, he discussed new and growing trends in Data Science, as well as in Automated ML/AI, and put them into context with our Guided Analytics vision. Check out his presentation here.

Thursday and Friday were full of engaging presentations from data scientists, business analysts, and independent users across different industries such as Marketing, HR, Sales, Manufacturing, Life Sciences, Cybersecurity, and more. They highlighted the diverse application areas of KNIME Software - from text mining, guided analytics, and disease tagging, to anomaly detection, image analysis, bots, and more. We also looked into what's new and what's cooking at KNIME. Thank you once again to our speakers for all these great presentations! You can check out their slides here.

Based on feedback from last year, we extended the Summit by offering KNIME Courses on Tuesday as well as Wednesday. There were courses in KNIME Analytics Platform (for beginner and advanced users), Text Mining, and Big Data, as well as IoT Analytics and Advanced Analytics. Dean Abbott was also there delivering his course on "The Power of Random".

It was great hosting everyone in Austin and we are now looking forward to the KNIME Spring Summit from March 18 - 22, 2019, in Berlin, Germany. A 30% early bird discount is available for those who want to secure their spot early.
Categories: BI

New Mixed License Pricing: Pay for what your users really use

VTiger - Wed, 11/21/2018 - 01:01
In today’s increasingly competitive business environment, staying ahead of the game is getting harder. In addition to needing to market everywhere, and provide competitive products and services, businesses also need to provide an outstanding customer experience. Part of how they do the latter is with CRM Software – which, when used right, helps businesses acquire, […]
Categories: CRM

Removing the integration headache in M&A deals

SnapLogic - Tue, 11/20/2018 - 15:28

Originally published in Finance Monthly.  The merger and acquisition market is on track to hit record levels in 2018. According to Mergermarket, the first half of the year saw 8,560 deals recorded globally at a value of $1.94tn, with 26 deals falling into the megadeals category of over $10bn per deal. The landscape is littered[...] Read the full article here. The post Removing the integration headache in M&A deals appeared first on SnapLogic.

Categories: ETL

WordPress Admin Styles for CiviCRM with WordPress

CiviCRM - Mon, 11/19/2018 - 15:39

In the past few weeks we have been looking at how a more uniform user experience could be provided between the CiviCRM and WordPress dashboards. We looked at potentially leveraging Shoreditch, but quickly realized that it depends on a Drupal theme and that the readme clearly stated it was only for Drupal. So we stepped back and looked at how we could do this with CSS changes that apply to the admin only, since that is not affected by the theme at all.

Categories: CRM

A Serverless Architecture for Big Data

Talend - Mon, 11/19/2018 - 10:53

This post is co-authored by Jorge Villamariona and Anselmo Barrero at Qubole. A popular term emerging from the software industry over the last few years is serverless computing, more commonly referred to as just "serverless". So what does it mean? In its simplest form, a serverless architecture is a computing model where a service provider dynamically manages the allocation of computing resources based on a Service Level Agreement (SLA), provisioning and running resources only for the time needed and without requiring end-user involvement. With a serverless architecture, the service provider automatically increases computing capacity when demand for resources is high and intelligently scales down when demand goes down. In this architecture, the end users only care about the tasks they want to execute (get a report, execute a query, execute a data pipeline, etc.) without the hassle of procuring, provisioning, and managing the underlying infrastructure.

Traditional vs. Serverless Architectures

So, what are some major advantages of going serverless? Cost, scale, and environment options, to start. Traditional architectures rely on the infrastructure administrator's ability to estimate workloads and size hardware and software accordingly. Moving to the cloud represents an improvement over on-premises architectures because it allows the infrastructure to scale on demand. However, administrators still need to be involved to define the conditions and rules to scale and manage the cloud infrastructure. The next step forward is to leverage a serverless architecture and allow the infrastructure to automatically decide behind the scenes when to provision, scale, and decommission resources as workloads change. Qubole is a great example of a serverless architecture. The Qubole platform automatically determines the infrastructure needed and scales it intelligently based on the workloads and SLAs. As a result, Qubole's serverless architecture saves customers over 50% in annual infrastructure costs compared to traditional and other managed cloud big data architectures. This intelligent automation allows Qubole to process over an exabyte of data per month for customers deploying AI, machine learning, and analytics, without requiring customers to provision and manage any infrastructure.

Value of adopting a serverless architecture for Big Data

Big data deals with large volumes of data arriving at high speed, which makes it difficult and inefficient to estimate the infrastructure required to process it ahead of time. On-premises infrastructures impose limits on processing power and are expensive and complex to manage and maintain. Deploying big data in the cloud on your own, or as a managed service from cloud providers (Amazon AWS, Microsoft Azure, Google Cloud, etc.), eases the processing limitations and the capex burden, but it creates overhead in managing and attempting to optimize the infrastructure. Improper utilization, underutilization, or overutilization in certain time periods can lead to cloud costs that are much higher than on-premises processing. This, combined with scarce skilled resources, results in a very low success rate of only 15% for all big data projects, according to Gartner. To successfully leverage a serverless platform for big data, you need to look for a solution that addresses the following questions:

  • Will it reduce big data infrastructure costs?
  • Does it provide automation and resources to execute data pipelines and provide analytics at any scale?
  • Will it reduce operational costs?
  • Will it help my data team scale and not be overrun by business demands for data? 
A serverless platform like Qubole is very appealing to teams deploying big data because it addresses the factors that cause big data projects to fail: it reduces infrastructure complexity and costs, as well as reliance on scarce experts. Qubole reduces administration overhead by providing a simple interface to define the run-time characteristics of big data engines. Users only need to specify the minimum and maximum cluster size, whether to leverage spot instances (in the case of AWS), and the cluster composition to meet their price/performance objectives. Qubole then takes over and automatically manages the infrastructure based on the business requirements and the workloads' SLA, without the need for further manual intervention. Qubole's serverless architecture auto-scales to avoid latencies when dealing with large, bursty incoming loads, and it also downscales to avoid idle, wasted resources. Qubole can scale from 5 nodes up to 200 nodes in less than 5 minutes. For reference, Qubole also manages the largest Spark cluster in the cloud (500+ nodes).

TCO of a Serverless Big Data Architecture

When it comes to pricing, Qubole's serverless architecture offers the best performance by adding computing capacity only when needed and downscaling it in an orderly way as soon as resources become idle. With Qubole there are no infrastructure administration overheads or overspent cloud resources. Additionally, as we can see in the chart above, data teams leveraging Qubole don't suffer from delays in provisioning computing resources when workloads suddenly increase. The combination of Talend Cloud and Qubole not only lowers infrastructure costs, but also increases the productivity of the data team, since they don't need to worry about cluster procurement, configuration, and management. Data teams build their data pipelines in Talend Cloud and push their execution to the Qubole serverless platform, all without having to write complex code or manage infrastructure. This partnership allows these teams to focus on building highly functional end-to-end data pipelines, allowing data scientists to more quickly deploy IoT, machine learning, and advanced analytics applications that have a high impact on the business. With Talend and Qubole, data teams build scalable serverless data pipelines that work at low operating costs while often being engineered and maintained by single developers. This cost reduction makes the benefits of big data more accessible to a wider audience. To learn more about Qubole and test-drive the serverless platform, visit https://www.qubole.com/lp/testdrive/

About the Authors

Jorge Villamariona works for the Product Marketing team at Qubole. Over the years Mr. Villamariona has acquired extensive experience in relational databases, business intelligence, big data engines, ETL, and CRM systems. Mr. Villamariona enjoys complex data challenges and helping customers gain greater insight and value from their existing data.

Anselmo Barrero is a Director of Business Development at Qubole with more than 25 years of experience in IT and three patents granted. Mr. Barrero is passionate about building products and strategic partnerships to address market opportunities. He has created products that yield more than 50% YoY growth and established strategic partnerships in areas such as Data Warehouse that resulted in more than 100% consecutive YoY growth. In his current role, Mr. Barrero is responsible for establishing strategic partnerships in big data and the cloud to allow customers to reduce the cost and time of getting value out of their data.

The post A Serverless Architecture for Big Data appeared first on Talend Real-Time Open Source Data Integration Software.
Categories: ETL

Getting Started with Talend Open Studio: Building Your First Job

Talend - Sun, 11/18/2018 - 11:02

In the previous blog, we walked through the installation and set-up of Talend Open Studio and briefly demonstrated key features to familiarize you with the Studio interface. In this blog, we will build a simple job to load data from a local file into Snowflake, a cloud data warehouse technology. More specifically, we will build a new job that takes customer data from your local machine and maps it to a target table within Snowflake. To follow along in this tutorial, you will need Talend Open Studio for Data Integration (download here), some customer data (either use your own customer data or generate some dummy data), and a Snowflake data warehouse with a database already created. If you don’t have access to a Snowflake data warehouse, you can use another relational database technology. To see a video of this tutorial, please feel free to see our step-by-step webinar—just skip to the third video. To get started, right-click within the Job Designs folder in the repository and create a new folder titled “Data Integration” to house your job. Next, dive into that new folder and choose “create a job” and name your job Customer_Load. As a best practice, enter the job’s intended purpose and a general description of its overall function. Once you click finish, the new job will be available within your new folder. Bringing Data from a CSV into a Talend Job Before building out your flow, bring your customer data into the Repository. To do this, create a new file delimited element within your Metadata folder by right-clicking on the File Delimited button and choosing “Create File Delimited.” Then, name the file “Customer” and click Next. From there, browse to locate your customer data file. Once selected, the data is visible within the File Viewer. To define the settings and your data elements, click “Next”. In the next window, choose to use a comma as a field separator. Because we selected a CSV file, set the Escape Char setting to CSV and the Text Enclosure to be double quotes. Make sure to check “Set heading row as column names” before proceeding. Now the customer data is imported and organized. One final time, click “Next” to confirm your data schema. Talend Open Studio will guess each column’s type based on each column’s contents—be sure to double check that everything is correct. After checking this data set, you can see that Talend Open Studio guessed the “phone2” column was a date, which is incorrect, so instead, change it to String and then click Finish. Next, you can drag your Customers delimited file onto the Design window as a tFileInputDelimited component. This brings your customer data into the Talend job. Creating Your Snowflake Connection Next, you need to create a new connection to your existing Snowflake table. First, find the Snowflake heading in the Repository and right-click to “Create a new connection”. Give your new connection a name, and then enter your account name (so if your Snowflake URL is talend.snowflake.com, your account name would be talend), User ID, and password. Also identify the Snowflake warehouse, schema and database you will be moving your data to. After you input all of the necessary information, test the connection and make sure it is successful. Following a successful connection to Snowflake, select from the listed tables those you want to be added to the Talend Repository for this connection then click finish. This will import the schema of those tables from Snowflake into the Repository. 
From there, you can now choose your table of interest from the repository (in this case, Customers) and drag and drop it into the Design window as a tSnowflakeOutput component. As a side note, we have chosen to use an existing table in this tutorial; however, you can also use Talend to create a table in an existing database. To map the source data (customer file) to the target table (Snowflake), add a tMap component by clicking within the Design window and searching for “tMap”. The tMap component is a very robust component that can be used for a wide range of functions, but for now, we will be using it to simply link the fields between two tables (to learn more about tMap, stay tuned for the next blog in the series). To start using the tMap, connect the CSV file to tMap by dragging the orange arrow from the file delimited component to the tMap. Next, to connect the tMap to your Snowflake output, right click on tMap and select Row, and click *New Output* to create a new output connection and give it a name like “Customers”. Then, select “Yes” when asked whether you would like to get the schema of the target component. Your Design window should now look like this:  To configure the tMap component, double-click on the component itself within the Design Window to reveal the input and output tables. Here you must link both table columns together. You can either drag and drop to link each corresponding field individually, or select Automap, which works great in this case to link the fields between the two tables. Make any adjustments necessary. Once you have ensured the types have been properly auto-selected, click Ok to save this configuration. If you haven’t installed the additional licenses yet, this Snowflake output component will flash an error. If that’s the case, simply select to install the additional packages which are located within the Help drop-down. You’re now ready to run the job and populate the data tables within Snowflake. Within the Run tab, simply click Run. You can watch the process run from start to finish within Studio, pushing 500 rows out to Snowflake. Once the run has been completed successfully, you can head to your Snowflake account. In this example, you can see that 500 records were successfully processed through Talend Studio and loaded into your Snowflake Cloud Data Warehouse. And that’s how to build your first job within Talend Open Studio. In our next blog, we will go through some more complex functionalities of tMap, and we will also give a few tips on running and debugging your Talend jobs. Please leave a comment and let us know if there are any other things that would help you get started on Talend Open Studio. The post Getting Started with Talend Open Studio: Building Your First Job appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Vtiger’s Most Awaited Release: Sales Enterprise Edition

VTiger - Sat, 11/17/2018 - 23:50
Each year a multitude of organizations spend huge sums of money trying to find better ways to grow sales. This in itself is a testimony as to how important sales can be to organizations. From a customer’s standpoint, your sales team is their first point of contact with your organization. The extent of brand loyalty […]
Categories: CRM

Changing OSGi References

Liferay - Fri, 11/16/2018 - 12:53

So we've all seen those @Reference annotations scattered throughout the Liferay code, and it can almost seem like those references are not changeable.

In fact, this is not really true at all.

The OSGi Configuration Admin service can be used to change the reference binding without touching the code.

Let's take a look at a contrived example from Liferay's com.liferay.blogs.demo.internal.BlogsDemo class. This class has a number of @Reference injections for different types of demo content generators. One of those is declared as:

@Reference(target = "(source=lorem-ipsum)", unbind = "-")
protected void setLoremIpsumBlogsEntryDemoDataCreator(
	BlogsEntryDemoDataCreator blogsEntryDemoDataCreator) {

	_blogsEntryDemoDataCreators.add(blogsEntryDemoDataCreator);
}

So in this example, com.liferay.blogs.demo.data.creator.internal.LoremIpsumBlogsEntryDemoDataCreatorImpl is registered with a property, "source=lorem-ipsum", and it can generate content for a blogs entry demo.

Let's say that we have our own demo data creator, com.example.KlingonBlogsEntryDemoDataCreatorImpl, that generates blog entries in Klingon (it has "source=klingon" defined for its property), and we want the blogs demo class to not use the lorem-ipsum version, but instead use our klingon variety. How can we do this?

Well, BlogsDemo is a component, so we could create a copy of it and change the relevant code to @Reference ours, but this seems like overkill. A much easier way would be to get OSGi to just bind to our instance rather than the original. This is actually quite easy to do.

First we will need to create a configuration admin override file in osgi/config named after the full class name but with a .config extension. So we need to create an osgi/config/com.liferay.blogs.demo.internal.BlogsDemo.config file. This file will have our override for the reference to bind to, but we need to get some more details for that.

We need to know the name of the field that we're going to be setting; that will be part of the configuration change. This comes from what the @Reference decorates. If @Reference is on a field, the field name is the name you need; if it is on a setter, the name is the setter method name without the leading "set" prefix. So, from above, since we have setLoremIpsumBlogsEntryDemoDataCreator(), our field name will be "LoremIpsumBlogsEntryDemoDataCreator".

To change the target, we'll need to add a line to our config file with the following:

LoremIpsumBlogsEntryDemoDataCreator.target="(source\=klingon)"

This will effectively change the target string from the old (source=lorem-ipsum) to the new (source=klingon). So this is how we can basically change up the wiring w/o really overriding a line of code.

You can even take this further. With a simple @Reference annotation w/o a target filter, you can add a target filter to bind a different reference. This could be an alternative to relying on a higher service ranking for binding. For those cases where a service tracker is being used to track a list of entities, you can use this technique to exclude one or more references that you don't want the service tracker to capture.

So actually I didn't come up with all of this myself. It's an adaptation of https://dev.liferay.com/develop/tutorials/-/knowledge_base/7-0/overriding-service-references#configure-the-component-to-use-the-custom-service to demonstrate just how that can be used to change the wiring. David H Nebinger 2018-11-16T17:53:00Z

Categories: CMS, ECM

Accessing Services in JSPs

Liferay - Fri, 11/16/2018 - 10:21
Introduction

When developing JSP-based portlets for OSGi deployment, and even when doing JSP fragment bundle overrides, it is often necessary to get service references in the JSP pages. But OSGi @Reference won't work in the JSP files, so we need ways to expose the services so they can be accessed in the JSPs...

Retrieving Services in the JSP

So we're going to work this a little backwards: we're going to cover how to get the service reference in the JSP itself.

In order to get the references, we're going to use a scriptlet to pull the reference from the request attributes, similar to:

<% TrashHelper trashHelper = (TrashHelper) request.getAttribute(TrashHelper.class.getName()); %>

The idea is that we will be pulling the reference directly out of the request attributes. We need to cast the object coming from the attributes to the right type, and we'll be following the Liferay standard of using the full class name as the attribute key.

The challenge is how to set the attribute into the request.

Setting Services in a Portlet You Control

So when you control the portlet code, injecting the service reference is pretty easy.

In your portlet class, you're going to add your @Reference for the service you need to pass. Your portlet class would include something along the lines of:

@Reference(unbind = "-") protected void setTrashHelper(TrashHelper trashHelper) { this._trashHelper = trashHelper; } private TrashHelper _trashHelper;

With the reference available, you'll then override the render() method to set the attribute:

@Override
public void render(RenderRequest renderRequest, RenderResponse renderResponse)
	throws IOException, PortletException {

	renderRequest.setAttribute(TrashHelper.class.getName(), _trashHelper);

	super.render(renderRequest, renderResponse);
}

So this sets the service as an attribute in the render request. On the JSP side, it would be able to get the service via the code shared above.

Setting Services in a Portlet You Do Not Control

So you may need to build a JSP fragment bundle to override JSP code, and in your override you need to add a service which was not injected by the core portlet.  It would be kind of overkill to override the portlet just to inject missing services.

So how can you inject the services you need? A portlet filter implementation!

Portlet filters are similar to the old servlet filters: they are used to wrap the invocation of an underlying portlet. And, like servlet filters, they can make adjustments to requests and responses on the way into the portlet as well as on the way out.

So we can build a portlet filter component and inject our service reference that way...

@Component(
	immediate = true,
	property = "javax.portlet.name=com_liferay_dictionary_web_portlet_DictionaryPortlet",
	service = PortletFilter.class
)
public class TrashHelperPortletFilter implements RenderFilter {

	@Override
	public void doFilter(
			RenderRequest renderRequest, RenderResponse renderResponse,
			FilterChain filterChain)
		throws IOException, PortletException {

		filterChain.doFilter(renderRequest, renderResponse);

		renderRequest.setAttribute(TrashHelper.class.getName(), _trashHelper);
	}

	@Reference(unbind = "-")
	protected void setTrashHelper(TrashHelper trashHelper) {
		this._trashHelper = trashHelper;
	}

	private TrashHelper _trashHelper;

}

So this portlet filter is configured to bind to the Dictionary portlet. It will be invoked at each portlet render since it implements a RenderFilter. The implementation calls through to the filter chain to invoke the portlet, but on the way out it adds the helper service to the request attributes.

Conclusion

So we've seen how we can use OSGi services in the JSP files indirectly via request attribute injection. In portlets we control, we can inject the service directly. For portlets we do not control, we can use a portlet filter to inject the service too.

David H Nebinger 2018-11-16T15:21:00Z
Categories: CMS, ECM

Pro Liferay Deployment

Liferay - Fri, 11/16/2018 - 00:15
Introduction

The official Liferay deployment docs are available here: https://dev.liferay.com/discover/deployment They make it easy for folks new to Liferay to get the system up and running and work through all of the necessary configuration. But it is not the process followed by professionals. I wanted to share the process I use that it might provide an alternative set of instructions that you can use to build out your own production deployment process. The Bundle Like the Liferay docs, you may want to start from a bundle; always start from the latest bundle you can. It is, for the most part, a working system that may be ready to go. I say for the most part because many of the bundles are older versions of the application servers. This may or may not be a concern for your organization, so consider whether you need to update the application server. You'll want to explode the bundle so all of the files are ready to go. If you are using DXP, you'll want to download and apply the latest fixpack. Doing this before the first start will ensure that you won't need to deal with an upgrade later on. The Database You will, of course, need a database for Liferay to connect to and set up. I prefer to create the initial database using database specific tools. One key aspect to keep in mind is that the database must be set up for UTF-8 support as Liferay will be storing UTF-8 content. Here's examples for what I use for MySQL/MariaDB: create database lportal character set utf8; grant all privileges on lportal.* to 'MyUser’@‘192.168.1.5' identified by 'myS3cr3tP4sswd'; flush privileges; Here's the example I use for Postgres: create role MyUser with login password 'myS3cr3tP4sswd'; alter role MyUser createdb; alter role MyUser superuser; create database lportal with owner 'MyUser' encoding 'UTF8' LC_COLLATE='en_US.UTF-8' LC_CTYPE='en_US.UTF-8' template template0; grant all privileges on database lportal to MyUser; There's other examples available for other databases, but hopefully you get the gist. From an enterprise perspective, you'll have things to consider such as a backup strategy, possibly a replication strategy, a cluster strategy, ... These things will obviously depend upon enterprise needs and requirements and are beyond the scope of this blog post. Along with the database, you'll need to connect the appserver to the database. I always want to go for the JNDI database configuration rather than sticking the values in the portal-ext.properties. The passwords are much more secure in the JNDI database configuration. For tomcat, this means going into the conf/Catalina/localhost directory and editing the ROOT.xml file as such: <Resource name="jdbc/LiferayPool" auth="Container" type="javax.sql.DataSource" factory="com.zaxxer.hikari.HikariJNDIFactory" minimumIdle="5" maximumPoolSize="10" connectionTimeout="300000" dataSource.user="MyUser" dataSource.password="myS3cr3tP4sswd" driverClassName="org.mariadb.jdbc.Driver" dataSource.implicitCachingEnabled="true" jdbcUrl="jdbc:mariadb://dbserver/lportal?characterEncoding=UTF-8&dontTrackOpenResources=true&holdResultsOpenOverStatementClose=true&useFastDateParsing=false&useUnicode=true" /> Elasticsearch Elasticsearch is also necessary, so the next step is to stand up your ES solution. Could be one node or a cluster. Get your ES system set up and collect your IP address(es). Verify that firewall rules allow for connectivity from the appserver(s) to the ES node(s). 
With the ES servers, create an ES configuration file, com.liferay.portal.search.elasticsearch.configuration.ElasticsearchConfiguration.config in the osgi/config directory and set the contents: operationMode="REMOTE" clientTransportIgnoreClusterName="false" indexNamePrefix="liferay-" httpCORSConfigurations="" additionalConfigurations="" httpCORSAllowOrigin="/https?://localhost(:[0-9]+)?/" networkBindHost="" transportTcpPort="" bootstrapMlockAll="false" networkPublishHost="" clientTransportSniff="true" additionalIndexConfigurations="" retryOnConflict="5" httpCORSEnabled="true" clientTransportNodesSamplerInterval="5s" additionalTypeMappings="" logExceptionsOnly="true" httpEnabled="true" networkHost="[_eth0_,_local_]" transportAddresses=["lres01:9300","lres02:9300"] clusterName="liferay" discoveryZenPingUnicastHostsPort="9300-9400" Obviously you'll need to edit the contents to use local IP address(es) and/or name(s). This can and should all be set up before the Liferay first start. Portal-ext.properties Next is the portal-ext.properties file. Below is the one that I typically start with as it fits most of the use cases for the portal that I've used. All properties are documented here: https://docs.liferay.com/ce/portal/7.0/propertiesdoc/portal.properties.html company.default.web.id=example.com company.default.home.url=/web/example default.logout.page.path=/web/example default.landing.page.path=/web/example admin.email.from.name=Example Admin admin.email.from.address=admin@example.com users.reminder.queries.enabled=false session.timeout=5 session.timeout.warning=0 session.timeout.auto.extend=true session.tracker.memory.enabled=false permissions.inline.sql.check.enabled=true layout.user.private.layouts.enabled=false layout.user.private.layouts.auto.create=false layout.user.public.layouts.enabled=false layout.user.public.layouts.auto.create=false layout.show.portlet.access.denied=false redirect.url.security.mode=domain browser.launcher.url= index.search.limit=2000 index.filter.search.limit=2000 index.on.upgrade=false setup.wizard.enabled=false setup.wizard.add.sample.data=off counter.increment=2000 counter.increment.com.liferay.portal.model.Layout=10 direct.servlet.context.reload=false search.container.page.delta.values=20,30,50,75,100,200 com.liferay.portal.servlet.filters.gzip.GZipFilter=false com.liferay.portal.servlet.filters.monitoring.MonitoringFilter=false com.liferay.portal.servlet.filters.sso.ntlm.NtlmFilter=false com.liferay.portal.servlet.filters.sso.opensso.OpenSSOFilter=false com.liferay.portal.sharepoint.SharepointFilter=false com.liferay.portal.servlet.filters.validhtml.ValidHtmlFilter=false blogs.pingback.enabled=false blogs.trackback.enabled=false blogs.ping.google.enabled=false dl.file.rank.check.interval=-1 dl.file.rank.enabled=false message.boards.pingback.enabled=false company.security.send.password=false company.security.send.password.reset.link=false company.security.strangers=false company.security.strangers.with.mx=false company.security.strangers.verify=false #company.security.auth.type=emailAddress company.security.auth.type=screenName #company.security.auth.type=userId field.enable.com.liferay.portal.kernel.model.Contact.male=false field.enable.com.liferay.portal.kernel.model.Contact.birthday=false terms.of.use.required=false # ImageMagick imagemagick.enabled=false #imagemagick.global.search.path[apple]=/opt/local/bin:/opt/local/share/ghostscript/fonts:/opt/local/share/fonts/urw-fonts 
imagemagick.global.search.path[unix]=/usr/bin:/usr/share/ghostscript/fonts:/usr/share/fonts/urw-fonts #imagemagick.global.search.path[windows]=C:\\Program Files\\gs\\bin;C:\\Program Files\\ImageMagick # OpenOffice # soffice -headless -accept="socket,host=127.0.0.1,port=8100;urp;" openoffice.server.enabled=true # xuggler xuggler.enabled=true #hibernate.jdbc.batch_size=0 hibernate.jdbc.batch_size=200 cluster.link.enabled=true ehcache.cluster.link.replication.enabled=true cluster.link.channel.properties.control=tcpping.xml cluster.link.channel.properties.transport.0=tcpping.xml cluster.link.autodetect.address=dbserver company.security.auth.requires.https=true main.servlet.https.required=true atom.servlet.https.required=true axis.servlet.https.required=true json.servlet.https.required=true jsonws.servlet.https.required=true spring.remoting.servlet.https.required=true tunnel.servlet.https.required=true webdav.servlet.https.required=true rss.feeds.https.required=true dl.store.impl=com.liferay.portal.store.file.system.AdvancedFileSystemStore Okay, so first of all, don't just copy this into your portal-ext.properties file as-is. You'll need to edit it for names, sites, addresses, etc. It also enables clusterlink and sets up use of https as well as the advanced filesystem store. I tend to use TCPPING for my ClusterLink configuration as unicast doesn't have some of the connectivity issues. I use a standard configuration (seen below), and use the tomcat setenv.sh file to specify the initial hosts. <!-- TCP based stack, with flow control and message bundling. This is usually used when IP multicasting cannot be used in a network, e.g. because it is disabled (routers discard multicast). Note that TCP.bind_addr and TCPPING.initial_hosts should be set, possibly via system properties, e.g. 
-Djgroups.bind_addr=192.168.5.2 and -Djgroups.tcpping.initial_hosts=192.168.5.2[7800] author: Bela Ban --> <config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:org:jgroups" xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd"> <TCP bind_port="7800" recv_buf_size="${tcp.recv_buf_size:5M}" send_buf_size="${tcp.send_buf_size:5M}" max_bundle_size="64K" max_bundle_timeout="30" use_send_queues="true" sock_conn_timeout="300" timer_type="new3" timer.min_threads="4" timer.max_threads="10" timer.keep_alive_time="3000" timer.queue_max_size="500" thread_pool.enabled="true" thread_pool.min_threads="2" thread_pool.max_threads="8" thread_pool.keep_alive_time="5000" thread_pool.queue_enabled="true" thread_pool.queue_max_size="10000" thread_pool.rejection_policy="discard" oob_thread_pool.enabled="true" oob_thread_pool.min_threads="1" oob_thread_pool.max_threads="8" oob_thread_pool.keep_alive_time="5000" oob_thread_pool.queue_enabled="false" oob_thread_pool.queue_max_size="100" oob_thread_pool.rejection_policy="discard"/> <TCPPING async_discovery="true" initial_hosts="${jgroups.tcpping.initial_hosts:localhost[7800],localhost[7801]}" port_range="2"/> <MERGE3 min_interval="10000" max_interval="30000"/> <FD_SOCK/> <FD timeout="3000" max_tries="3" /> <VERIFY_SUSPECT timeout="1500" /> <BARRIER /> <pbcast.NAKACK2 use_mcast_xmit="false" discard_delivered_msgs="true"/> <UNICAST3 /> <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="4M"/> <pbcast.GMS print_local_addr="true" join_timeout="2000" view_bundling="true"/> <MFC max_credits="2M" min_threshold="0.4"/> <FRAG2 frag_size="60K" /> <!--RSVP resend_interval="2000" timeout="10000"/--> <pbcast.STATE_TRANSFER/> </config> Additionally, since I want to use the Advanced Filesystem Store, I need a osgi/config/com.liferay.portal.store.file.system.configuration.AdvancedFileSystemStoreConfiguration.cfg file with the following contents: ## ## To apply the configuration, place this file in the Liferay installation's osgi/modules folder. Make sure it is named ## com.liferay.portal.store.file.system.configuration.AdvancedFileSystemStoreConfiguration.cfg. ## rootDir=/liferay/document_library JVM & App Server Config So of course I use the Deployment Checklist to configure JVM, GC and memory configuration. I do prefer to use at least an 8G memory partition. Also I add the JGroups initial hosts. Conclusion Conclusion? But we haven't really started up the portal yet, how can this be the conclusion? And that is really the point. All configuration is done before the portal is launched. Any other settings that could be changed in the System Settings control panel, well those I would also create the osgi/config file(s) to hold the settings. The more that is done in configuration pre-launch, the less likelihood there is of getting unnecessary data loaded, user public/private layouts that might not be needed, proper filesystem store out of the gate, ... It really is how the pros get their Liferay environments up and running... David H Nebinger 2018-11-16T05:15:00Z

Categories: CMS, ECM

Beachbody Gets Data Management in Shape with Talend Solutions

Talend - Thu, 11/15/2018 - 21:20

This post is co-authored by Hari Umapathy, Lead Data Engineer at Beachbody and Aarthi Sridharan, Sr.Director of Data (Enterprise Technology) at Beachbody. Beachbody is a leading provider of fitness, nutrition, and weight-loss programs that deliver results for our more than 23 million customers. Our 350,000 independent “coach” distributors help people reach their health and financial goals. The company was founded in 1998, and has more than 800 employees. Digital business and the management of data is a vital part of our success. We average more than 5 million monthly unique visits across our digital platforms, which generates an enormous amount of data that we can leverage to enhance our services, provide greater customer satisfaction, and create new business opportunities. Building a Big Data Lake One of our most important decisions with regard to data management was deploying Talend’s Real Time Big Data platform about two years ago. We wanted to build a new data environment, including a cloud-based data lake, that could help us manage the fast-growing volumes of data and the growing number of data sources. We also wanted to glean more and better business insights from all the data we are gathering, and respond more quickly to changes. We are planning to gradually add at least 40 new data sources, including our own in-house databases as well as external sources such as Google Adwords, Doubleclick, Facebook, and a number of other social media sites. We have a process in which we ingest data from the various sources, store the data that we ingested into the data lake, process the data and then build the reporting and the visualization layer on top of it. The process is enabled in part by Talend’s ETL (Extract, Transform, Load) solution, which can gather data from an unlimited number of sources, organize the data, and centralize it into a single repository such as a data lake. We already had a traditional, on-premise data warehouse, which we still use, but we were looking for a new platform that could work well with both cloud and big data-related components, and could enable us to bring on the new data sources without as much need for additional development efforts. The Talend solution enables us to execute new jobs again and again when we add new data sources to ingest in the data lake, without having to code each time. We now have a practice of reusing the existing job via a template, then just bringing in a different set of parameters. That saves us time and money, and allows us to shorten the turnaround time for any new data acquisitions that we had to do as an organization. The Results of Digital Transformation For example, whenever a business analytics team or other group comes to us with a request for a new job, we can usually complete it over a two-week sprint. The data will be there for them to write any kind of analytics queries on top of it. That’s a great benefit. The new data sources we are acquiring allow us to bring all kinds of data into the data lake. For example, we’re adding information such as reports related to the advertisements that we place on Google sites, the user interaction that has taken place on those sites, and the revenue we were able to generate based on those advertisements. We are also gathering clickstream data from our on-demand streaming platform, and all the activities and transactions related to that. And we are ingesting data from the Salesforce.com marketing cloud, which has all the information related to the email marketing that we do. 
For instance, there’s data about whether people opened the email, whether they responded to the email and how. Currently, we have about 60 terabytes of data in the data lake, and as we continue to add data sources we anticipate that the volume will at least double in size within the next year. Getting Data Management in Shape for GDPR One of the best use cases we’ve had that’s enabled by the Talend solution relates to our efforts to comply with the General Data Protection Regulation (GDPR). The regulation, a set of rules created by the European Parliament, European Council, and European Commission that took effect in May 2018, is designed to bolster data protection and privacy for individuals within the European Union (EU). We leverage the data lake whenever we need to quickly access customer data that falls under the domain of GDPR. So when a customer asks us for data specific to that customer we have our team create the files from the data lake. The entire process is simple, making it much easier to comply with such requests. Without a data lake that provides a single, centralized source of information, we would have to go to individual departments within the company to gather customer information. That’s far more complex and time-consuming. When we built the data lake it was principally for the analytics team. But when different data projects such as this arise we can now leverage the data lake for those purposes, while still benefiting from the analytics use cases. Looking to the Future Our next effort, which will likely take place in 2019, will be to consolidate various data stores within the organization with our data lake. Right now different departments have their own data stores, which are siloed. Having this consolidation, which we will achieve using the Talend solutions and the automation these tools provide, will give us an even more convenient way to access data and run business analytics on the data. We are also planning to leverage the Talend platform to increase data quality. Now that we’re increasing our data sources and getting much more into data analytics and data science, quality becomes an increasingly important consideration. Members of our organization will be able to use the data quality side of the solution in the upcoming months. Beachbody has always been an innovative company when it comes to gleaning value from our data. But with the Talend technology we can now take data management to the next level. A variety of processes and functions within the company will see use cases and benefits from this, including sales and marketing, customer service, and others. About the Authors:  Hari Umapathy Hari Umapathy is a Lead Data Engineer at Beachbody working on architecting, designing and developing their Data Lake using AWS, Talend, Hadoop and Redshift.  Hari is a Cloudera Certified Developer for Apache Hadoop.  Previously, he worked at Infosys Limited as a Technical Project Lead managing applications and databases for a huge automotive manufacturer in the United States.  Hari holds a bachelor’s degree in Information Technology from Vellore Institute of Technology, Vellore, India.   Aarthi Sridharan Aarthi Sridharan is the Sr.Director of Data (Enterprise Technology) at Beachbody LLC,  a health and fitness company in Santa Monica. Aarthi’s leadership drives the organization’s abilities to make data driven decisions for accelerated growth and operational excellence. 
Aarthi and her team are responsible for ingesting and transforming large volumes of data into the traditional enterprise data warehouse and the data lake, and for building analytics on top of it. The post Beachbody Gets Data Management in Shape with Talend Solutions appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Boosting Search

Liferay - Thu, 11/15/2018 - 17:43
Introduction

A client recently was moving off of Google Search Appliance (GSA) on to Liferay and Elasticsearch. One key aspect of GSA that they relied on though, was KeyMatch.

What is KeyMatch? Well, in GSA an administrator can define a list of specific keywords and assign content to them. When a user performs a search that includes one of the specific keywords, the associated content is boosted to the top of the search results.

This way an admin can ensure that a specific piece of content can be promoted as a top result.

For example, say you run a bakery. During holidays, you have specially decorated cakes and cupcakes. You might define a KeyMatch from "cupcake" to your specialty cupcakes so that when a user searches, they get the specialty cupcakes ahead of your normal cupcakes.

Elasticsearch Tuning

So Elasticsearch, the heart of the Liferay search facilities, does not have KeyMatch support. It may often seem that there are few search result tuning capabilities at all, but this is not the case.

There are tuning opportunities for Elasticsearch, but it does take some effort to get the outcomes you're hoping for.

Tag Boosting

So one way to get a result similar to KeyMatch would be to boost the match for tags.

In our bakery example above, all of our content related to cupcakes will, of course, appear in the search results for "cupcake", if only because the keyword is part of the content. Tagging content with "cupcake" would also get it to come up as a search result, but may not make it score high enough to stand out in the results.

We could, however, use tag boosting so that a keyword match on a tag would push a match to the top of the search results.

So how do you implement a tag boost? Through a custom IndexerPostProcessor implementation.

Here's one that I whipped up that will boost tag matches by 100.0:

@Component(
	immediate = true,
	property = {
		"indexer.class.name=com.liferay.journal.model.JournalArticle",
		"indexer.class.name=com.liferay.document.library.kernel.model.DLFileEntry"
	},
	service = IndexerPostProcessor.class
)
public class TagBoostIndexerPostProcessor
	extends BaseIndexerPostProcessor implements IndexerPostProcessor {

	@Override
	public void postProcessFullQuery(
			BooleanQuery fullQuery, SearchContext searchContext)
		throws Exception {

		List<BooleanClause<Query>> clauses = fullQuery.clauses();

		if ((clauses == null) || clauses.isEmpty()) {
			return;
		}

		for (BooleanClause<Query> clause : clauses) {
			updateBoost(clause.getClause());
		}
	}

	protected void updateBoost(final Query query) {
		if (query instanceof BooleanClauseImpl) {
			BooleanClauseImpl<Query> booleanClause = (BooleanClauseImpl<Query>)query;

			updateBoost(booleanClause.getClause());
		}
		else if (query instanceof BooleanQueryImpl) {
			BooleanQueryImpl booleanQuery = (BooleanQueryImpl)query;

			for (BooleanClause<Query> clause : booleanQuery.clauses()) {
				updateBoost(clause.getClause());
			}
		}
		else if (query instanceof WildcardQueryImpl) {
			WildcardQueryImpl wildcardQuery = (WildcardQueryImpl)query;

			if (wildcardQuery.getQueryTerm().getField().startsWith(
					Field.ASSET_TAG_NAMES)) {

				query.setBoost(100.0f);
			}
		}
		else if (query instanceof MatchQuery) {
			MatchQuery matchQuery = (MatchQuery)query;

			if (matchQuery.getField().startsWith(Field.ASSET_TAG_NAMES)) {
				query.setBoost(100.0f);
			}
		}
	}

}

So this is an IndexerPostProcessor implementation that is bound to all JournalArticles and DLFileEntries. When a search is performed, the postProcessFullQuery() method will be invoked with the full query to be processed and the search context. The code above identifies all tag matches and increases the boost for them.

This implementation uses recursion because the passed in query is actually a tree; processing via recursion is an easy way to visit each node in the tree looking for matches on tag names.

When a match is found, the boost on the query is set to 100.0.

Using this implementation, if a single article is tagged with "cupcake", a search for "cupcake" will cause those articles with the tag to jump to the top of the search results.

Other Modification Ideas

This is an example of how you can modify the search before it is handed off to Elasticsearch for processing.

It can be used to remove query items, change query items, add query items, etc.

It can also be used to adjust the query filters to exclude items from search results.
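As an illustration of that last idea, here is a minimal sketch of an exclusion filter, assuming the same IndexerPostProcessor extension point and the portal-kernel BooleanFilter and TermFilter classes; the "internal" tag is just a hypothetical example:

import com.liferay.portal.kernel.search.BaseIndexerPostProcessor;
import com.liferay.portal.kernel.search.BooleanClauseOccur;
import com.liferay.portal.kernel.search.Field;
import com.liferay.portal.kernel.search.IndexerPostProcessor;
import com.liferay.portal.kernel.search.SearchContext;
import com.liferay.portal.kernel.search.filter.BooleanFilter;
import com.liferay.portal.kernel.search.filter.TermFilter;

import org.osgi.service.component.annotations.Component;

@Component(
	immediate = true,
	property = "indexer.class.name=com.liferay.journal.model.JournalArticle",
	service = IndexerPostProcessor.class
)
public class ExcludeInternalTagIndexerPostProcessor
	extends BaseIndexerPostProcessor {

	@Override
	public void postProcessContextBooleanFilter(
			BooleanFilter booleanFilter, SearchContext searchContext)
		throws Exception {

		// Drop anything tagged "internal" from the result set entirely.
		booleanFilter.add(
			new TermFilter(Field.ASSET_TAG_NAMES, "internal"),
			BooleanClauseOccur.MUST_NOT);
	}

}

Because the clause is added with MUST_NOT, matching entries are filtered out of the results rather than just demoted.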

Conclusion

So the internals of the postProcessFullQuery() method and arguments are not really documented, at least not anywhere in detail that I could find for adjusting the query results.

Rather than reading through the code for how the query is built, when I was creating this override, I actually used a debugger to check the nodes of the tree to determine types, fields, etc.

I hope this will give you some ideas about how you too might adjust your search queries in ways to manipulate search results to get the ordering you're looking for.

David H Nebinger 2018-11-15T22:43:00Z
Categories: CMS, ECM

Introducing Talend API Services: Providing Best in Class Purpose-Built Applications

Talend - Thu, 11/15/2018 - 12:50

Have you heard?  Talend Fall ’18 is here and continues on Talend’s plan to meet the challenges of today’s data professionals around organizing, processing and sharing data at scale. Earlier Jean-Michel Franco wrote up about the Data Catalog portion of this exciting Fall 2018 launch. In this blog I’d like to focus on our new API features. For many organizations, APIs are no longer just technological creations of engineers to connect components of a distributed system. Today, APIs are directly driving business revenues, enabling innovation, and are the source of connectivity with partners. With our Fall ‘18 Launch, Talend Cloud will include a new API delivery platform, Talend Cloud API Services. Our delivery platform provides best in class purpose-built applications for API-first creation, testing, and documentation. Essentially, this platform enables organizations to be more agile in their API development. The platform provides productivity gains compared to hand coding through easy to use graphical design supporting both technical and less-technical personal. Additional enhancements to existing tools for API implementation and operation ensures organizations have a comprehensive approach to building user-friendly data APIs. And finally, Talend Cloud’s full support for open standards such as OAS, Swagger, and RAML makes the Talend API delivery platform complementary to existing third-party API gateways and catalogs. Allowing for easy implementation with your existing gateway or catalog. Talend Cloud API Designer With Fall ’18, Talend Cloud provides a new purpose-built application for designing API contracts visually instead of having to go the traditional route of hand coding. Developers can start from scratch or import an existing OAS / RAML definition. The interface allows developers to define API data types, resources, operations, parameters, responses, media types, and errors. Once a developer is finished defining their contract, the API designer will generate the OAS / RAML definition for you! This can be used later as part of the service(s) creation or imported into an existing API gateway / API catalog. I took a quick screenshot to show what the interface is going to look like. Now I know how much everyone likes to write documentation (or maybe not). Thankfully with the API designer, the basic documentation is auto-generated for you. Users can then host it on Talend Cloud and easily share it with end consumers in a public or private mode. Talend Cloud API Designer also provides users with the ability to extend the generated documentation through an included rich text editor. Below is an example of the documentation generated by Talend Cloud API Designer. Talend Cloud API Designer also provides Automatic API mocking that can act as a live preview for end consumers, decoupling support for consumer application development while the backend services are developed. Mocked API’s can return data specified during the contract design or automatically generate the data based on the defined data structure. This mock is kept up to date throughout the development process and enabled using the interface below. This will be a huge benefit for the consumers of my API, they won’t have to wait for me to finish building the back end before they start writing their applications. It’s pretty easy to turn on inside API designer. A single click and users are off to the races. Talend Cloud API Tester Fall ’18 also includes a new application within Talend Cloud called Talend Cloud API Tester. 
Talend Cloud API Tester

Fall ’18 also includes a new application within Talend Cloud called Talend Cloud API Tester. Through this interface, QA / DevOps teams can easily call and inspect any HTTP-based API. It works with complex JSON or XML responses, enabling teams to validate the API’s behavior. Calls are stored, so I can easily look back through my history at what I’ve done before. An example of the interface is shown below.

My favorite feature of Talend Cloud API Tester is the ability to chain API calls together to create scenarios. These chained requests can use data returned from a previous call as parameters in the next call, enabling teams to create real-world examples of how the API will be used (a simplified sketch of this pattern appears at the end of this post). Thankfully, this will keep my Notepad++ tabs down to a minimum. An example of this scenario design is provided below.

Throughout the testing process, I can define assertions to help validate API responses: assertions can check a payload for completeness, check how quickly a response arrived, or even check whether a field has a specific value. Here’s an example of an assertion I made recently. The last benefit I’d like to highlight is that test cases created with API Tester can be exported for use within a continuous integration / continuous delivery pipeline, ensuring that further updates to the APIs maintain consistency.

Talend Studio for API Implementation

We’ve made it simple to start working with the APIs built in API Designer: there’s a new metadata group called REST API definitions, and a couple of clicks later I’ve downloaded the API definition and am ready to start building. We can use the contract to bootstrap a service with its defined URIs, media types, parameters, and so on. This approach expedites delivery of the backend service by reducing the work of defining the various functions. There are also updates to Talend Data Mapper (TDM): I can use the schema from the API definition as the return schema from TDM, which makes it much easier to convert data into the expected media type and structure.

Talend Cloud for API Operation

Yes, Talend Cloud can now manage the services you’ve built in Talend Studio, just like it manages data integration jobs. If this is your first time hearing about Talend Cloud, it provides a managed environment in which developers can publish services developed in Talend Studio to the Talend Management Console’s artifact repository or a third-party repository, and manage the various environments the service needs to be deployed to as part of the QA / DevOps process. An example of this management can be seen in the snippet below.

As you can see, there is a mountain of functionality available in the new Talend Cloud API Services. If you’d like to see more, stay tuned: we have a series of videos and enablement material to get you up to speed. I look forward to hearing about the APIs you plan to build, and keep an eye on upcoming blogs from Talend if you’re looking for some inspiration. I’ll be following this post up with a series of use cases we’ve seen and are hearing about!

The post Introducing Talend API Services: Providing Best in Class Purpose-Built Applications appeared first on Talend Real-Time Open Source Data Integration Software.
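As promised above, here is a minimal sketch in plain Python of what a chained scenario with assertions boils down to. The https://api.example.com endpoints and fields are hypothetical, and this is not how Talend Cloud API Tester itself is scripted (it builds scenarios graphically and can export them for CI/CD); the sketch only illustrates the underlying pattern of feeding one response into the next request and asserting on the results.

# Chain two calls against a hypothetical API and assert on the results.
import requests

BASE = "https://api.example.com"

# Step 1: create a customer and capture the id returned by the API.
create = requests.post(f"{BASE}/customers",
                       json={"name": "Ada", "email": "ada@example.com"})
assert create.status_code == 201, f"expected 201, got {create.status_code}"
customer_id = create.json()["id"]

# Step 2: reuse the id from step 1 as a parameter of the next call.
fetch = requests.get(f"{BASE}/customers/{customer_id}")
body = fetch.json()

# Assertions: completeness of the payload, timeliness, and a specific field value.
assert fetch.status_code == 200
assert fetch.elapsed.total_seconds() < 1.0, "response took longer than 1s"
assert {"id", "name", "email"} <= body.keys(), "payload is missing expected fields"
assert body["name"] == "Ada"

print("scenario passed")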

Categories: ETL

CiviTutorial - Another MIH Success!

CiviCRM - Thu, 11/15/2018 - 12:07

We came down to the wire with CiviTutorial, with less than a day to go before the Make It Happen campaign funding its development was set to expire. In the end, 24 awesome donors pitched in to fund the extension and make CiviTutorial a reality.

Categories: CRM

A CiviCRM Rebirth

CiviCRM - Wed, 11/14/2018 - 17:22

An interview with Restoring the Foundations Ministry

Restoring the Foundations Ministry (RTF) offers an integrated approach to biblical healing, with over 200 teams around the globe providing training and personal ministry to churches and people seeking help. The mission of the organization is to offer “hope for healing, freedom from life’s deepest struggles, and renewed purpose for living.” Jaque Orsi, office administrator at RTF, recently spoke with Cividesk to share her experience of using CiviCRM.

Categories: CRM