AdoptOS

Assistance with Open Source adoption

Open Source News

New Mixed License Pricing: Pay for what your users really use

VTiger - Wed, 11/21/2018 - 01:01
In today’s increasingly competitive business environment, staying ahead of the game is getting harder. In addition to needing to market everywhere, and provide competitive products and services, businesses also need to provide an outstanding customer experience. Part of how they do the latter is with CRM Software – which, when used right, helps businesses acquire, […]
Categories: CRM

Removing the integration headache in M&A deals

SnapLogic - Tue, 11/20/2018 - 15:28

Originally published in Finance Monthly.  The merger and acquisition market is on track to hit record levels in 2018. According to Mergermarket, the first half of the year saw 8,560 deals recorded globally at a value of $1.94tn, with 26 deals falling into the megadeals category of over $10bn per deal. The landscape is littered[...] Read the full article here.

The post Removing the integration headache in M&A deals appeared first on SnapLogic.

Categories: ETL

WordPress Admin Styles for CiviCRM with WordPress

CiviCRM - Mon, 11/19/2018 - 15:39

In the past few weeks we have been looking at how a more uniform user experience could be provided between CiviCRM and WordPress dashboards. We looked at potentially leveraging Shoreditch, but quickly realized that it's dependent on a Drupal theme and the readme clearly stated that it was only for Drupal. So we stepped back and looked at how we could do this with CSS changes that apply to the admin only, since this is not affected by the theme at all.

Categories: CRM

A Serverless Architecture for Big Data

Talend - Mon, 11/19/2018 - 10:53

This post is co-authored by Jorge Villamariona and Anselmo Barrero at Qubole.

A popular term emerging from the software industry over the last few years is serverless computing, more commonly referred to as just “serverless”. So what does it mean? In its simplest form, a serverless architecture is a computing model where a service provider dynamically manages the allocation of computing resources based on a Service Level Agreement (SLA), provisioning and running resources only for the time needed and without requiring end-user involvement. 

With a serverless architecture, the service provider automatically increases computing capacity when demand for resources is high and intelligently downscales when demand goes down. In this architecture, end users only care about the tasks they want to execute (get a report, execute a query, execute a data pipeline, etc.) without the hassle of procuring, provisioning, and managing the underlying infrastructure.

Traditional vs. Serverless Architectures

So, what are some major advantages of going serverless? Cost, scale, and environment options, to start. Traditional architectures rely on the infrastructure administrator's ability to estimate workloads and size hardware and software accordingly. Moving to the cloud represents an improvement over on-premises architectures because it allows the infrastructure to scale on demand.

However, administrators still need to be involved to define the conditions and rules to scale and manage the cloud infrastructure. The next step forward is to leverage a serverless architecture and allow the infrastructure to automatically decide behind the scenes when to provision, scale and decommission resources as workloads change.  Qubole is a great example of a serverless architecture.

The Qubole platform automatically determines the infrastructure needed and scales it intelligently based on the workloads and SLAs.  As a result, Qubole’s serverless architecture saves customers over 50% in annual infrastructure costs compared to traditional and other managed cloud big data architectures.

This intelligent automation allows Qubole to process over an exabyte of data per month for customers deploying AI, machine learning, and analytics, without requiring customers to provision and manage any infrastructure.

Value of adopting a serverless architecture for Big Data

Big data deals with large volumes of data arriving at high speed, which makes it difficult and inefficient to estimate the required processing infrastructure ahead of time. On-premises infrastructures impose limits on processing power and are expensive and complex to manage and maintain. Deploying big data in the cloud on your own, or as a managed service from cloud providers (Amazon AWS, Microsoft Azure, Google Cloud, etc.), eases the processing limitations and the capex burden, but it creates overhead in managing and optimizing the infrastructure. Improper utilization, whether underutilization or overutilization, in certain time periods can lead to cloud costs that are much higher than on-premises processing. This, combined with scarce skilled resources, results in a very low success rate of only 15% for all big data projects, according to Gartner.

To successfully leverage a serverless platform for big data you need to look for a solution that addresses the following questions:

  • Will it reduce big data infrastructure costs?
  • Does it provide automation and resources to execute data pipelines and provide analytics at any scale?
  • Will it reduce operational costs?
  • Will it help my data team scale and not be overrun by business demands for data? 

A serverless platform like Qubole is very appealing to teams deploying big data because it addresses the factors that cause big data projects to fail: it reduces infrastructure complexity and costs, as well as reliance on scarce experts.

Qubole reduces the administration overhead by providing a simple interface to define the run-time characteristics of big data engines. Users only need to specify the minimum and maximum cluster size, whether to leverage spot instances (in the case of AWS), and the cluster composition to meet their price-performance objectives. Qubole then takes over and automatically manages the infrastructure based on the business requirements and the workloads' SLAs, without the need for further manual intervention.

Qubole’s serverless architecture auto-scales to avoid latencies when dealing with large bursty incoming loads and it also down-scales to avoid idle wasted resources.  Qubole can scale from 5 nodes up to 200 nodes in less than 5 minutes. For reference, Qubole also manages the largest Spark cluster in the cloud (500+ nodes).

TCO of a Serverless Big Data Architecture 

When it comes to pricing, Qubole's serverless architecture offers the best performance by adding computing capacity only when needed and downscaling it in an orderly fashion as soon as resources become idle.

With Qubole there is no infrastructure administration overhead and no overspend on cloud resources. Additionally, as we can see in the chart above, data teams leveraging Qubole don't suffer from delays in provisioning computing resources when workloads suddenly increase.

The combination of Talend Cloud and Qubole not only lowers infrastructure costs, but also increases the productivity of the data team, since they don't need to worry about cluster procurement, configuration, and management. Data teams build their data pipelines in Talend Cloud and push their execution to the Qubole serverless platform, all without having to write complex code or manage infrastructure.

This partnership allows these teams to focus on building high-functioning, end-to-end data pipelines, and it allows data scientists to deploy IoT, machine learning, and advanced analytics applications with high business impact faster. With Talend and Qubole, data teams build scalable serverless data pipelines that run at low operating cost while often being engineered and maintained by a single developer. This cost reduction makes the benefits of big data accessible to a wider audience.

To learn more about Qubole and test-drive the serverless platform visit https://www.qubole.com/lp/testdrive/

About the Authors

Jorge Villamariona works for the Product Marketing team at Qubole.  Over the years Mr. Villamariona has acquired extensive experience in relational databases, business intelligence, big data engines, ETL,  and CRM systems. Mr. Villamariona enjoys complex data challenges and helping customers gain greater insight and value from their existing data.

Anselmo Barrero is a Director of Business Development at Qubole with more than 25 years of experience in IT and three patents granted. Mr. Barrero is passionate about building products and strategic partnerships to address market opportunities. He has created products that yield more than 50% YoY growth and established strategic partnerships in areas such as data warehousing that resulted in more than 100% consecutive YoY growth. In his current role, Mr. Barrero is responsible for establishing strategic partnerships in big data and the cloud to allow customers to reduce the cost and time of getting value out of their data.

The post A Serverless Architecture for Big Data appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Getting Started with Talend Open Studio: Building Your First Job

Talend - Sun, 11/18/2018 - 11:02

In the previous blog, we walked through the installation and set-up of Talend Open Studio and briefly demonstrated key features to familiarize you with the Studio interface. In this blog, we will build a simple job to load data from a local file into Snowflake, a cloud data warehouse technology. More specifically, we will build a new job that takes customer data from your local machine and maps it to a target table within Snowflake.

To follow along in this tutorial, you will need Talend Open Studio for Data Integration (download here), some customer data (either use your own customer data or generate some dummy data), and a Snowflake data warehouse with a database already created. If you don’t have access to a Snowflake data warehouse, you can use another relational database technology.

To see a video of this tutorial, please feel free to see our step-by-step webinar—just skip to the third video.

To get started, right-click within the Job Designs folder in the repository and create a new folder titled “Data Integration” to house your job. Next, dive into that new folder and choose “create a job” and name your job Customer_Load. As a best practice, enter the job’s intended purpose and a general description of its overall function. Once you click finish, the new job will be available within your new folder.

Bringing Data from a CSV into a Talend Job

Before building out your flow, bring your customer data into the Repository. To do this, create a new file delimited element within your Metadata folder by right-clicking on the File Delimited button and choosing “Create File Delimited.” Then, name the file “Customer” and click Next.

From there, browse to locate your customer data file. Once selected, the data is visible within the File Viewer. To define the settings and your data elements, click “Next”. In the next window, choose to use a comma as a field separator. Because we selected a CSV file, set the Escape Char setting to CSV and the Text Enclosure to be double quotes. Make sure to check “Set heading row as column names” before proceeding.

Now the customer data is imported and organized. One final time, click “Next” to confirm your data schema. Talend Open Studio will guess each column’s type based on each column’s contents—be sure to double check that everything is correct.

After checking this data set, you can see that Talend Open Studio guessed the “phone2” column was a date, which is incorrect, so instead, change it to String and then click Finish.

Next, you can drag your Customers delimited file onto the Design window as a tFileInputDelimited component. This brings your customer data into the Talend job.

Creating Your Snowflake Connection

Next, you need to create a new connection to your existing Snowflake table. First, find the Snowflake heading in the Repository and right-click to “Create a new connection”. Give your new connection a name, and then enter your account name (so if your Snowflake URL is talend.snowflake.com, your account name would be talend), User ID, and password. Also identify the Snowflake warehouse, schema and database you will be moving your data to.

After you input all of the necessary information, test the connection and make sure it is successful. Following a successful connection to Snowflake, select from the listed tables those you want added to the Talend Repository for this connection, then click Finish. This will import the schema of those tables from Snowflake into the Repository. From there, you can now choose your table of interest from the repository (in this case, Customers) and drag and drop it into the Design window as a tSnowflakeOutput component. As a side note, we have chosen to use an existing table in this tutorial; however, you can also use Talend to create a table in an existing database.

To map the source data (customer file) to the target table (Snowflake), add a tMap component by clicking within the Design window and searching for “tMap”. The tMap component is a very robust component that can be used for a wide range of functions, but for now, we will be using it to simply link the fields between two tables (to learn more about tMap, stay tuned for the next blog in the series). To start using the tMap, connect the CSV file to tMap by dragging the orange arrow from the file delimited component to the tMap.

Next, to connect the tMap to your Snowflake output, right-click on the tMap, select Row, and click "New Output" to create a new output connection, giving it a name like "Customers". Then, select "Yes" when asked whether you would like to get the schema of the target component.

Your Design window should now look like this: 

To configure the tMap component, double-click on the component itself within the Design window to reveal the input and output tables. Here you must link the columns of the two tables together. You can either drag and drop to link each corresponding field individually, or select Automap, which works great in this case to link the fields between the two tables. Make any adjustments necessary. Once you have ensured the types have been properly auto-selected, click OK to save this configuration.

If you haven’t installed the additional licenses yet, this Snowflake output component will flash an error. If that’s the case, simply select to install the additional packages which are located within the Help drop-down.

You’re now ready to run the job and populate the data tables within Snowflake. Within the Run tab, simply click Run. You can watch the process run from start to finish within Studio, pushing 500 rows out to Snowflake.

Once the run has been completed successfully, you can head to your Snowflake account. In this example, you can see that 500 records were successfully processed through Talend Studio and loaded into your Snowflake Cloud Data Warehouse.

And that’s how to build your first job within Talend Open Studio. In our next blog, we will go through some more complex functionalities of tMap, and we will also give a few tips on running and debugging your Talend jobs. Please leave a comment and let us know if there are any other things that would help you get started on Talend Open Studio.

The post Getting Started with Talend Open Studio: Building Your First Job appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Vtiger’s Most Awaited Release: Sales Enterprise Edition

VTiger - Sat, 11/17/2018 - 23:50
Each year a multitude of organizations spend huge sums of money trying to find better ways to grow sales. This in itself is a testimony as to how important sales can be to organizations. From a customer’s standpoint, your sales team is their first point of contact with your organization. The extent of brand loyalty […]
Categories: CRM

Changing OSGi References

Liferay - Fri, 11/16/2018 - 12:53

So we've all seen those @Reference annotations scattered throughout the Liferay code, and it can almost seem like those references are not changeable.

In fact, this is not really true at all.

The OSGi Configuration Admin service can be used to change the reference binding without touching the code.

Let's take a look at a contrived example from Liferay's com.liferay.blogs.demo.internal.BlogsDemo class.

This class has a number of @Reference injections for different types of demo content generators. One of those is declared as:

@Reference(target = "(source=lorem-ipsum)", unbind = "-")
protected void setLoremIpsumBlogsEntryDemoDataCreator(
    BlogsEntryDemoDataCreator blogsEntryDemoDataCreator) {

    _blogsEntryDemoDataCreators.add(blogsEntryDemoDataCreator);
}

So in this example, the com.liferay.blogs.demo.data.creator.internal.LoremIpsumBlogsEntryDemoDataCreatorImpl is registered with a property, "source=lorem-ipsum", and it can generate content for a blogs entry demo.

Let's say that we have our own demo data creator, com.example.KlingonBlogsEntryDemoDataCreatorImpl that generates blog entries in Klingon (it has "source=klingon" defined for its property), and we want the blogs demo class to not use the lorem-ipsum version, but instead use our klingon variety.
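
As a rough, self-contained sketch of what such a provider looks like on the OSGi side (using a made-up GreetingGenerator interface here rather than the real BlogsEntryDemoDataCreator, purely so the example compiles on its own), the key point is just the "source=klingon" property on the component:

package com.example.internal;

import org.osgi.service.component.annotations.Component;

// Hypothetical service interface, declared here only to keep the sketch self-contained;
// the real case would implement BlogsEntryDemoDataCreator instead.
interface GreetingGenerator {

    String generate();
}

// Registered with "source=klingon", so an @Reference with target = "(source=klingon)"
// binds to this implementation.
@Component(
    immediate = true,
    property = "source=klingon",
    service = GreetingGenerator.class
)
public class KlingonGreetingGenerator implements GreetingGenerator {

    @Override
    public String generate() {
        return "nuqneH! Qapla'!";
    }
}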

How can we do this?

Well, BlogsDemo is a component, so we could create a copy of it and change the relevant code to @Reference ours, but this seems kind of like overkill.

A much easier way would be to get OSGi to just bind to our instance rather than the original. This is actually quite easy to do.

First we will need to create a configuration admin override file in osgi/config named after the full class name but with a .config extension.

So we need to create an osgi/config/com.liferay.blogs.demo.internal.BlogsDemo.config file. This file will have our override for the reference to bind to, but we need to get some more details for that.

We need to know the name of the field that we're going to be setting, as that will be part of the configuration change. This will actually come from what the @Reference decorates. If @Reference is on a field, the field name will be the name you need; if it is on a setter, the name will be the setter method name without the leading "set" prefix.

So, from above, since we have setLoremIpsumBlogsEntryDemoDataCreator(), our field name will be "LoremIpsumBlogsEntryDemoDataCreator".

To change the target, we'll need to add a line to our config file with the following:

LoremIpsumBlogsEntryDemoDataCreator.target="(source\=klingon)"

This will effectively change the target string from the old (source=lorem-ipsum) to the new (source=klingon).

So this is how we can change up the wiring without overriding a single line of code.

You can even take this further. With a simple @Reference annotation that has no target filter, you can add a target filter to bind a different reference. This could be an alternative to relying on a higher service ranking for binding.

For those cases where a service tracker is being used to track a list of entities, you can use this technique to exclude one or more references that you don't want to have the service tracker capture.

I didn't come up with all of this myself; it's an adaptation of https://dev.liferay.com/develop/tutorials/-/knowledge_base/7-0/overriding-service-references#configure-the-component-to-use-the-custom-service that demonstrates just how it can be used to change the wiring.

David H Nebinger 2018-11-16T17:53:00Z
Categories: CMS, ECM

Accessing Services in JSPs

Liferay - Fri, 11/16/2018 - 10:21
Introduction

When developing JSP-based portlets for OSGi deployment, and even when doing JSP fragment bundle overrides, it is often necessary to get service references in the JSP pages. But OSGi @Reference won't work in the JSP files, so we need ways to expose the services so they can be accessed in the JSPs...

Retrieving Services in the JSP

So we're going to work this a little backwards and first cover how to get the service reference in the JSP itself.

In order to get the references, we're going to use a scriptlet to pull the reference from the request attributes, similar to:

<% TrashHelper trashHelper = (TrashHelper) request.getAttribute(TrashHelper.class.getName()); %>

The idea is that we will be pulling the reference directly out of the request attributes. We need to cast the object coming from the attributes to the right type, and we'll be following the Liferay standard of using the full class name as the attribute key.

The challenge is how to set the attribute into the request.

Setting Services in a Portlet You Control

So when you control the portlet code, injecting the service reference is pretty easy.

In your portlet class, you're going to add your @Reference for the service you need to pass. Your portlet class would include something along the lines of:

@Reference(unbind = "-")
protected void setTrashHelper(TrashHelper trashHelper) {
    this._trashHelper = trashHelper;
}

private TrashHelper _trashHelper;

With the reference available, you'll then override the render() method to set the attribute:

@Override
public void render(RenderRequest renderRequest, RenderResponse renderResponse)
    throws IOException, PortletException {

    renderRequest.setAttribute(TrashHelper.class.getName(), _trashHelper);

    super.render(renderRequest, renderResponse);
}

So this sets the service as an attribute in the render request. On the JSP side, it would be able to get the service via the code shared above.

Setting Services in a Portlet You Do Not Control

So you may need to build a JSP fragment bundle to override JSP code, and in your override you need to add a service which was not injected by the core portlet.  It would be kind of overkill to override the portlet just to inject missing services.

So how can you inject the services you need? A portlet filter implementation!

Portlet filters are similar to the old servlet filters: they are used to wrap the invocation of an underlying portlet. And, like servlet filters, they can make adjustments to requests/responses on the way into the portlet as well as on the way out.

So we can build a portlet filter component and inject our service reference that way...

@Component(
    immediate = true,
    property = "javax.portlet.name=com_liferay_dictionary_web_portlet_DictionaryPortlet",
    service = PortletFilter.class
)
public class TrashHelperPortletFilter implements RenderFilter {

    @Override
    public void doFilter(
            RenderRequest renderRequest, RenderResponse renderResponse, FilterChain filterChain)
        throws IOException, PortletException {

        filterChain.doFilter(renderRequest, renderResponse);

        renderRequest.setAttribute(TrashHelper.class.getName(), _trashHelper);
    }

    @Reference(unbind = "-")
    protected void setTrashHelper(TrashHelper trashHelper) {
        this._trashHelper = trashHelper;
    }

    private TrashHelper _trashHelper;
}

So this portlet filter is configured to bind to the Dictionary portlet. It will be invoked at each portlet render since it implements a RenderFilter. The implementation calls through to the filter chain to invoke the portlet, but on the way out it adds the helper service to the request attributes.

Conclusion

So we've seen how we can use OSGi services in the JSP files indirectly via request attribute injection. In portlets we control, we can inject the service directly. For portlets we do not control, we can use a portlet filter to inject the service too.

David H Nebinger 2018-11-16T15:21:00Z
Categories: CMS, ECM

Pro Liferay Deployment

Liferay - Fri, 11/16/2018 - 00:15
Introduction

The official Liferay deployment docs are available here: https://dev.liferay.com/discover/deployment

They make it easy for folks new to Liferay to get the system up and running and work through all of the necessary configuration.

But it is not the process followed by professionals. I wanted to share the process I use in the hope that it provides an alternative set of instructions that you can use to build out your own production deployment process.

The Bundle

Like the Liferay docs, you may want to start from a bundle; always start from the latest bundle you can. It is, for the most part, a working system that may be ready to go. I say for the most part because many of the bundles ship with older versions of the application servers. This may or may not be a concern for your organization, so consider whether you need to update the application server.

You'll want to explode the bundle so all of the files are ready to go.

If you are using DXP, you'll want to download and apply the latest fixpack. Doing this before the first start will ensure that you won't need to deal with an upgrade later on.

The Database

You will, of course, need a database for Liferay to connect to and set up. I prefer to create the initial database using database specific tools. One key aspect to keep in mind is that the database must be set up for UTF-8 support as Liferay will be storing UTF-8 content.

Here's examples for what I use for MySQL/MariaDB:

create database lportal character set utf8;
grant all privileges on lportal.* to 'MyUser'@'192.168.1.5' identified by 'myS3cr3tP4sswd';
flush privileges;

Here's the example I use for Postgres:

create role MyUser with login password 'myS3cr3tP4sswd';
alter role MyUser createdb;
alter role MyUser superuser;
create database lportal with owner 'MyUser' encoding 'UTF8' LC_COLLATE='en_US.UTF-8' LC_CTYPE='en_US.UTF-8' template template0;
grant all privileges on database lportal to MyUser;

There's other examples available for other databases, but hopefully you get the gist.

From an enterprise perspective, you'll have things to consider such as a backup strategy, possibly a replication strategy, a cluster strategy, ... These things will obviously depend upon enterprise needs and requirements and are beyond the scope of this blog post.

Along with the database, you'll need to connect the appserver to the database. I always want to go for the JNDI database configuration rather than sticking the values in the portal-ext.properties. The passwords are much more secure in the JNDI database configuration.

For tomcat, this means going into the conf/Catalina/localhost directory and editing the ROOT.xml file as such:

<Resource name="jdbc/LiferayPool"
    auth="Container"
    type="javax.sql.DataSource"
    factory="com.zaxxer.hikari.HikariJNDIFactory"
    minimumIdle="5"
    maximumPoolSize="10"
    connectionTimeout="300000"
    dataSource.user="MyUser"
    dataSource.password="myS3cr3tP4sswd"
    driverClassName="org.mariadb.jdbc.Driver"
    dataSource.implicitCachingEnabled="true"
    jdbcUrl="jdbc:mariadb://dbserver/lportal?characterEncoding=UTF-8&amp;dontTrackOpenResources=true&amp;holdResultsOpenOverStatementClose=true&amp;useFastDateParsing=false&amp;useUnicode=true" />

Elasticsearch

Elasticsearch is also necessary, so the next step is to stand up your ES solution. Could be one node or a cluster. Get your ES system set up and collect your IP address(es). Verify that firewall rules allow for connectivity from the appserver(s) to the ES node(s).

With the ES servers, create an ES configuration file, com.liferay.portal.search.elasticsearch.configuration.ElasticsearchConfiguration.config in the osgi/config directory and set the contents:

operationMode="REMOTE"
clientTransportIgnoreClusterName="false"
indexNamePrefix="liferay-"
httpCORSConfigurations=""
additionalConfigurations=""
httpCORSAllowOrigin="/https?://localhost(:[0-9]+)?/"
networkBindHost=""
transportTcpPort=""
bootstrapMlockAll="false"
networkPublishHost=""
clientTransportSniff="true"
additionalIndexConfigurations=""
retryOnConflict="5"
httpCORSEnabled="true"
clientTransportNodesSamplerInterval="5s"
additionalTypeMappings=""
logExceptionsOnly="true"
httpEnabled="true"
networkHost="[_eth0_,_local_]"
transportAddresses=["lres01:9300","lres02:9300"]
clusterName="liferay"
discoveryZenPingUnicastHostsPort="9300-9400"

Obviously you'll need to edit the contents to use local IP address(es) and/or name(s). This can and should all be set up before the Liferay first start.

Portal-ext.properties

Next is the portal-ext.properties file. Below is the one that I typically start with as it fits most of the use cases for the portal that I've used. All properties are documented here: https://docs.liferay.com/ce/portal/7.0/propertiesdoc/portal.properties.html

company.default.web.id=example.com
company.default.home.url=/web/example
default.logout.page.path=/web/example
default.landing.page.path=/web/example
admin.email.from.name=Example Admin
admin.email.from.address=admin@example.com
users.reminder.queries.enabled=false
session.timeout=5
session.timeout.warning=0
session.timeout.auto.extend=true
session.tracker.memory.enabled=false
permissions.inline.sql.check.enabled=true
layout.user.private.layouts.enabled=false
layout.user.private.layouts.auto.create=false
layout.user.public.layouts.enabled=false
layout.user.public.layouts.auto.create=false
layout.show.portlet.access.denied=false
redirect.url.security.mode=domain
browser.launcher.url=
index.search.limit=2000
index.filter.search.limit=2000
index.on.upgrade=false
setup.wizard.enabled=false
setup.wizard.add.sample.data=off
counter.increment=2000
counter.increment.com.liferay.portal.model.Layout=10
direct.servlet.context.reload=false
search.container.page.delta.values=20,30,50,75,100,200
com.liferay.portal.servlet.filters.gzip.GZipFilter=false
com.liferay.portal.servlet.filters.monitoring.MonitoringFilter=false
com.liferay.portal.servlet.filters.sso.ntlm.NtlmFilter=false
com.liferay.portal.servlet.filters.sso.opensso.OpenSSOFilter=false
com.liferay.portal.sharepoint.SharepointFilter=false
com.liferay.portal.servlet.filters.validhtml.ValidHtmlFilter=false
blogs.pingback.enabled=false
blogs.trackback.enabled=false
blogs.ping.google.enabled=false
dl.file.rank.check.interval=-1
dl.file.rank.enabled=false
message.boards.pingback.enabled=false
company.security.send.password=false
company.security.send.password.reset.link=false
company.security.strangers=false
company.security.strangers.with.mx=false
company.security.strangers.verify=false
#company.security.auth.type=emailAddress
company.security.auth.type=screenName
#company.security.auth.type=userId
field.enable.com.liferay.portal.kernel.model.Contact.male=false
field.enable.com.liferay.portal.kernel.model.Contact.birthday=false
terms.of.use.required=false

# ImageMagick
imagemagick.enabled=false
#imagemagick.global.search.path[apple]=/opt/local/bin:/opt/local/share/ghostscript/fonts:/opt/local/share/fonts/urw-fonts
imagemagick.global.search.path[unix]=/usr/bin:/usr/share/ghostscript/fonts:/usr/share/fonts/urw-fonts
#imagemagick.global.search.path[windows]=C:\\Program Files\\gs\\bin;C:\\Program Files\\ImageMagick

# OpenOffice
# soffice -headless -accept="socket,host=127.0.0.1,port=8100;urp;"
openoffice.server.enabled=true

# xuggler
xuggler.enabled=true

#hibernate.jdbc.batch_size=0
hibernate.jdbc.batch_size=200

cluster.link.enabled=true
ehcache.cluster.link.replication.enabled=true
cluster.link.channel.properties.control=tcpping.xml
cluster.link.channel.properties.transport.0=tcpping.xml
cluster.link.autodetect.address=dbserver

company.security.auth.requires.https=true
main.servlet.https.required=true
atom.servlet.https.required=true
axis.servlet.https.required=true
json.servlet.https.required=true
jsonws.servlet.https.required=true
spring.remoting.servlet.https.required=true
tunnel.servlet.https.required=true
webdav.servlet.https.required=true
rss.feeds.https.required=true

dl.store.impl=com.liferay.portal.store.file.system.AdvancedFileSystemStore

Okay, so first of all, don't just copy this into your portal-ext.properties file as-is. You'll need to edit it for names, sites, addresses, etc. It also enables clusterlink and sets up use of https as well as the advanced filesystem store.

I tend to use TCPPING for my ClusterLink configuration, as unicast doesn't have some of the connectivity issues that multicast does. I use a standard configuration (seen below), and use the Tomcat setenv.sh file to specify the initial hosts.

<!--
    TCP based stack, with flow control and message bundling. This is usually used when IP
    multicasting cannot be used in a network, e.g. because it is disabled (routers discard
    multicast). Note that TCP.bind_addr and TCPPING.initial_hosts should be set, possibly
    via system properties, e.g. -Djgroups.bind_addr=192.168.5.2 and
    -Djgroups.tcpping.initial_hosts=192.168.5.2[7800]
    author: Bela Ban
-->
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns="urn:org:jgroups"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">
    <TCP bind_port="7800"
         recv_buf_size="${tcp.recv_buf_size:5M}"
         send_buf_size="${tcp.send_buf_size:5M}"
         max_bundle_size="64K"
         max_bundle_timeout="30"
         use_send_queues="true"
         sock_conn_timeout="300"
         timer_type="new3"
         timer.min_threads="4"
         timer.max_threads="10"
         timer.keep_alive_time="3000"
         timer.queue_max_size="500"
         thread_pool.enabled="true"
         thread_pool.min_threads="2"
         thread_pool.max_threads="8"
         thread_pool.keep_alive_time="5000"
         thread_pool.queue_enabled="true"
         thread_pool.queue_max_size="10000"
         thread_pool.rejection_policy="discard"
         oob_thread_pool.enabled="true"
         oob_thread_pool.min_threads="1"
         oob_thread_pool.max_threads="8"
         oob_thread_pool.keep_alive_time="5000"
         oob_thread_pool.queue_enabled="false"
         oob_thread_pool.queue_max_size="100"
         oob_thread_pool.rejection_policy="discard"/>
    <TCPPING async_discovery="true"
             initial_hosts="${jgroups.tcpping.initial_hosts:localhost[7800],localhost[7801]}"
             port_range="2"/>
    <MERGE3 min_interval="10000" max_interval="30000"/>
    <FD_SOCK/>
    <FD timeout="3000" max_tries="3"/>
    <VERIFY_SUSPECT timeout="1500"/>
    <BARRIER/>
    <pbcast.NAKACK2 use_mcast_xmit="false" discard_delivered_msgs="true"/>
    <UNICAST3/>
    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="4M"/>
    <pbcast.GMS print_local_addr="true" join_timeout="2000" view_bundling="true"/>
    <MFC max_credits="2M" min_threshold="0.4"/>
    <FRAG2 frag_size="60K"/>
    <!--RSVP resend_interval="2000" timeout="10000"/-->
    <pbcast.STATE_TRANSFER/>
</config>

Additionally, since I want to use the Advanced Filesystem Store, I need a osgi/config/com.liferay.portal.store.file.system.configuration.AdvancedFileSystemStoreConfiguration.cfg file with the following contents:

##
## To apply the configuration, place this file in the Liferay installation's osgi/modules folder. Make sure it is named
## com.liferay.portal.store.file.system.configuration.AdvancedFileSystemStoreConfiguration.cfg.
##
rootDir=/liferay/document_library

JVM & App Server Config

So of course I use the Deployment Checklist to configure the JVM, GC, and memory settings. I prefer to use at least an 8G memory partition. I also add the JGroups initial hosts in the setenv.sh file.

Conclusion

Conclusion? But we haven't really started up the portal yet, how can this be the conclusion?

And that is really the point. All configuration is done before the portal is launched. For any other settings that could be changed in the System Settings control panel, I would also create the osgi/config file(s) to hold those settings.

The more that is done in configuration pre-launch, the less likely you are to end up with unnecessary data loaded or user public/private layouts that might not be needed, and the more likely you are to have the proper filesystem store out of the gate, ...

It really is how the pros get their Liferay environments up and running...

David H Nebinger 2018-11-16T05:15:00Z
Categories: CMS, ECM

Beachbody Gets Data Management in Shape with Talend Solutions

Talend - Thu, 11/15/2018 - 21:20

This post is co-authored by Hari Umapathy, Lead Data Engineer at Beachbody and Aarthi Sridharan, Sr.Director of Data (Enterprise Technology) at Beachbody.

Beachbody is a leading provider of fitness, nutrition, and weight-loss programs that deliver results for our more than 23 million customers. Our 350,000 independent “coach” distributors help people reach their health and financial goals.

The company was founded in 1998, and has more than 800 employees. Digital business and the management of data is a vital part of our success. We average more than 5 million monthly unique visits across our digital platforms, which generates an enormous amount of data that we can leverage to enhance our services, provide greater customer satisfaction, and create new business opportunities.

Building a Big Data Lake

One of our most important decisions with regard to data management was deploying Talend’s Real Time Big Data platform about two years ago. We wanted to build a new data environment, including a cloud-based data lake, that could help us manage the fast-growing volumes of data and the growing number of data sources. We also wanted to glean more and better business insights from all the data we are gathering, and respond more quickly to changes.

We are planning to gradually add at least 40 new data sources, including our own in-house databases as well as external sources such as Google Adwords, Doubleclick, Facebook, and a number of other social media sites.

We have a process in which we ingest data from the various sources, store the data that we ingested into the data lake, process the data and then build the reporting and the visualization layer on top of it. The process is enabled in part by Talend’s ETL (Extract, Transform, Load) solution, which can gather data from an unlimited number of sources, organize the data, and centralize it into a single repository such as a data lake.

We already had a traditional, on-premise data warehouse, which we still use, but we were looking for a new platform that could work well with both cloud and big data-related components, and could enable us to bring on the new data sources without as much need for additional development efforts.

The Talend solution enables us to execute new jobs again and again when we add new data sources to ingest in the data lake, without having to code each time. We now have a practice of reusing the existing job via a template, then just bringing in a different set of parameters. That saves us time and money, and allows us to shorten the turnaround time for any new data acquisitions that we had to do as an organization.

The Results of Digital Transformation

For example, whenever a business analytics team or other group comes to us with a request for a new job, we can usually complete it over a two-week sprint. The data will be there for them to write any kind of analytics queries on top of it. That’s a great benefit.

The new data sources we are acquiring allow us to bring all kinds of data into the data lake. For example, we’re adding information such as reports related to the advertisements that we place on Google sites, the user interaction that has taken place on those sites, and the revenue we were able to generate based on those advertisements.

We are also gathering clickstream data from our on-demand streaming platform, and all the activities and transactions related to that. And we are ingesting data from the Salesforce.com marketing cloud, which has all the information related to the email marketing that we do. For instance, there’s data about whether people opened the email, whether they responded to the email and how.

Currently, we have about 60 terabytes of data in the data lake, and as we continue to add data sources we anticipate that the volume will at least double in size within the next year.

Getting Data Management in Shape for GDPR

One of the best use cases we’ve had that’s enabled by the Talend solution relates to our efforts to comply with the General Data Protection Regulation (GDPR). The regulation, a set of rules created by the European Parliament, European Council, and European Commission that took effect in May 2018, is designed to bolster data protection and privacy for individuals within the European Union (EU).

We leverage the data lake whenever we need to quickly access customer data that falls under the domain of GDPR. So when a customer asks us for data specific to that customer we have our team create the files from the data lake.

The entire process is simple, making it much easier to comply with such requests. Without a data lake that provides a single, centralized source of information, we would have to go to individual departments within the company to gather customer information. That’s far more complex and time-consuming.

When we built the data lake it was principally for the analytics team. But when different data projects such as this arise we can now leverage the data lake for those purposes, while still benefiting from the analytics use cases.

Looking to the Future

Our next effort, which will likely take place in 2019, will be to consolidate various data stores within the organization with our data lake. Right now different departments have their own data stores, which are siloed. Having this consolidation, which we will achieve using the Talend solutions and the automation these tools provide, will give us an even more convenient way to access data and run business analytics on the data.

We are also planning to leverage the Talend platform to increase data quality. Now that we’re increasing our data sources and getting much more into data analytics and data science, quality becomes an increasingly important consideration. Members of our organization will be able to use the data quality side of the solution in the upcoming months.

Beachbody has always been an innovative company when it comes to gleaning value from our data. But with the Talend technology we can now take data management to the next level. A variety of processes and functions within the company will see use cases and benefits from this, including sales and marketing, customer service, and others.

About the Authors: 

Hari Umapathy

Hari Umapathy is a Lead Data Engineer at Beachbody working on architecting, designing and developing their Data Lake using AWS, Talend, Hadoop and Redshift.  Hari is a Cloudera Certified Developer for Apache Hadoop.  Previously, he worked at Infosys Limited as a Technical Project Lead managing applications and databases for a huge automotive manufacturer in the United States.  Hari holds a bachelor’s degree in Information Technology from Vellore Institute of Technology, Vellore, India.

 

Aarthi Sridharan

Aarthi Sridharan is the Sr. Director of Data (Enterprise Technology) at Beachbody LLC, a health and fitness company in Santa Monica. Aarthi's leadership drives the organization's ability to make data-driven decisions for accelerated growth and operational excellence. Aarthi and her team are responsible for ingesting and transforming large volumes of data into the traditional enterprise data warehouse and the data lake, and for building analytics on top of it.

The post Beachbody Gets Data Management in Shape with Talend Solutions appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Boosting Search

Liferay - Thu, 11/15/2018 - 17:43
Introduction

A client recently was moving off of Google Search Appliance (GSA) on to Liferay and Elasticsearch. One key aspect of GSA that they relied on though, was KeyMatch.

What is KeyMatch? Well, in GSA an administrator can define a list of specific keywords and assign content to them. When a user performs a search that includes one of the specific keywords, the associated content is boosted to the top of the search results.

This way an admin can ensure that a specific piece of content can be promoted as a top result.

For example, say you run a bakery. During holidays, you have specially decorated cakes and cupcakes. You might define a KeyMatch for "cupcake" pointing to your specialty cupcakes so that when a user searches, they get the specialty cupcakes over your normal ones.

Elasticsearch Tuning

So Elasticsearch, the heart of the Liferay search facilities, does not have KeyMatch support. It may often seem that there are few search result tuning capabilities at all; in fact, this is not the case.

There are tuning opportunities for Elasticsearch, but it does take some effort to get the outcomes you're hoping for.

Tag Boosting

So one way to get a result similar to KeyMatch would be to boost the match for tags.

In our bakery example above, all of our content related to cupcakes will, of course, appear in search results for "cupcake", if only because the keyword is part of our content. Tagging content with "cupcake" would also get it to come up as a search result, but may not make it score high enough to stand out in the results.

We could, however, use tag boosting so that a keyword match on a tag would push a match to the top of the search results.

So how do you implement a tag boost? Through a custom IndexPostProcessor implementation.

Here's one that I whipped up that will boost tag matches by 100.0:

@Component(
    immediate = true,
    property = {
        "indexer.class.name=com.liferay.journal.model.JournalArticle",
        "indexer.class.name=com.liferay.document.library.kernel.model.DLFileEntry"
    },
    service = IndexerPostProcessor.class
)
public class TagBoostIndexerPostProcessor extends BaseIndexerPostProcessor implements IndexerPostProcessor {

    @Override
    public void postProcessFullQuery(BooleanQuery fullQuery, SearchContext searchContext)
        throws Exception {

        List<BooleanClause<Query>> clauses = fullQuery.clauses();

        if ((clauses == null) || (clauses.isEmpty())) {
            return;
        }

        Query query;

        for (BooleanClause<Query> clause : clauses) {
            query = clause.getClause();

            updateBoost(query);
        }
    }

    protected void updateBoost(final Query query) {
        if (query instanceof BooleanClauseImpl) {
            BooleanClauseImpl<Query> booleanClause = (BooleanClauseImpl<Query>) query;

            updateBoost(booleanClause.getClause());
        }
        else if (query instanceof BooleanQueryImpl) {
            BooleanQueryImpl booleanQuery = (BooleanQueryImpl) query;

            for (BooleanClause<Query> clause : booleanQuery.clauses()) {
                updateBoost(clause.getClause());
            }
        }
        else if (query instanceof WildcardQueryImpl) {
            WildcardQueryImpl wildcardQuery = (WildcardQueryImpl) query;

            if (wildcardQuery.getQueryTerm().getField().startsWith(Field.ASSET_TAG_NAMES)) {
                query.setBoost(100.0f);
            }
        }
        else if (query instanceof MatchQuery) {
            MatchQuery matchQuery = (MatchQuery) query;

            if (matchQuery.getField().startsWith(Field.ASSET_TAG_NAMES)) {
                query.setBoost(100.0f);
            }
        }
    }
}

So this is an IndexPostProcessor implementation that is bound to all JournalArticles and DLFileEntries. When a search is performed, the postProcessFullQuery() method will be invoked with the full query to be processed and the search context. The above code will be used to identify all tag matches and will increase the boost for them.

This implementation uses recursion because the passed in query is actually a tree; processing via recursion is an easy way to visit each node in the tree looking for matches on tag names.

When a match is found, the boost on the query is set to 100.0.

Using this implementation, if an article is tagged with "cupcake", a search for "cupcake" will cause articles with that tag to jump to the top of the search results.

Other Modification Ideas

This is an example of how you can modify the search before it is handed off to Elasticsearch for processing.

It can be used to remove query items, change query items, add query items, etc.

It can also be used to adjust the query filters to exclude items from search results.
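
As one hedged sketch of the "change query items" idea, the same recursive walk used above could lower a boost instead of raising one, for example to make plain content-field matches score well below tag matches. This method is meant to be dropped into the class above and called from postProcessFullQuery(); the 0.1f weight is an arbitrary example, not a recommended value:

protected void demoteContentMatches(final Query query) {
    if (query instanceof BooleanClauseImpl) {
        // Unwrap nested clauses, just like updateBoost() above.
        demoteContentMatches(((BooleanClauseImpl<Query>) query).getClause());
    }
    else if (query instanceof BooleanQueryImpl) {
        BooleanQueryImpl booleanQuery = (BooleanQueryImpl) query;

        for (BooleanClause<Query> clause : booleanQuery.clauses()) {
            demoteContentMatches(clause.getClause());
        }
    }
    else if (query instanceof MatchQuery) {
        MatchQuery matchQuery = (MatchQuery) query;

        // startsWith() also catches the localized content_* fields.
        if (matchQuery.getField().startsWith(Field.CONTENT)) {

            // A boost well below the tag boost pushes these matches down the results.
            query.setBoost(0.1f);
        }
    }
}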

Conclusion

So the internals of the postProcessFullQuery() method and its arguments are not really documented, at least not in the detail I could find for adjusting query results.

Rather than reading through the code for how the query is built, when I was creating this override, I actually used a debugger to check the nodes of the tree to determine types, fields, etc.

I hope this will give you some ideas about how you too might adjust your search queries in ways to manipulate search results to get the ordering you're looking for.

David H Nebinger 2018-11-15T22:43:00Z
Categories: CMS, ECM

Introducing Talend API Services: Providing Best in Class Purpose-Built Applications

Talend - Thu, 11/15/2018 - 12:50

Have you heard? Talend Fall '18 is here and continues Talend's plan to meet the challenges of today's data professionals around organizing, processing, and sharing data at scale. Earlier, Jean-Michel Franco wrote about the Data Catalog portion of this exciting Fall 2018 launch. In this blog I'd like to focus on our new API features.

For many organizations, APIs are no longer just technological creations of engineers to connect components of a distributed system. Today, APIs are directly driving business revenues, enabling innovation, and are the source of connectivity with partners.

With our Fall '18 launch, Talend Cloud will include a new API delivery platform, Talend Cloud API Services. Our delivery platform provides best-in-class, purpose-built applications for API-first creation, testing, and documentation. Essentially, this platform enables organizations to be more agile in their API development. The platform provides productivity gains over hand coding through an easy-to-use graphical design experience that supports both technical and less-technical personnel.

Additional enhancements to existing tools for API implementation and operation ensure organizations have a comprehensive approach to building user-friendly data APIs. And finally, Talend Cloud's full support for open standards such as OAS, Swagger, and RAML makes the Talend API delivery platform complementary to existing third-party API gateways and catalogs, allowing for easy implementation with your existing gateway or catalog.

Talend Cloud API Designer

With Fall ’18, Talend Cloud provides a new purpose-built application for designing API contracts visually instead of having to go the traditional route of hand coding. Developers can start from scratch or import an existing OAS / RAML definition. The interface allows developers to define API data types, resources, operations, parameters, responses, media types, and errors.

Once a developer is finished defining their contract, the API designer will generate the OAS / RAML definition for you! This can be used later as part of the service(s) creation or imported into an existing API gateway / API catalog. I took a quick screenshot to show what the interface is going to look like.

Now I know how much everyone likes to write documentation (or maybe not). Thankfully with the API designer, the basic documentation is auto-generated for you. Users can then host it on Talend Cloud and easily share it with end consumers in a public or private mode. Talend Cloud API Designer also provides users with the ability to extend the generated documentation through an included rich text editor. Below is an example of the documentation generated by Talend Cloud API Designer.

Talend Cloud API Designer also provides automatic API mocking that can act as a live preview for end consumers, decoupling consumer application development from backend service development. Mocked APIs can return data specified during the contract design or automatically generate the data based on the defined data structure.

This mock is kept up to date throughout the development process and is enabled using the interface below. This will be a huge benefit for the consumers of my API; they won't have to wait for me to finish building the back end before they start writing their applications. It's pretty easy to turn on inside API Designer: a single click and users are off to the races.

Talend Cloud API Tester

Fall '18 also includes a new application within Talend Cloud called Talend Cloud API Tester. Through this interface, QA / DevOps teams can easily call and inspect any HTTP-based API. It works with complex JSON or XML responses, enabling teams to validate the API's behavior. Calls made are stored, so I can easily look back into my history for what I've done before. An example of the interface is shown below.

My favorite feature of Talend Cloud API Tester is the ability to chain API calls together to create scenarios. These chained requests can utilize data returned from a previous call as parameters in the next call, enabling teams to create real-world examples of how the API will be used. Thankfully this will keep my Notepad++ tabs down to a minimum. An example of this scenario design is provided below.

Throughout the testing process, I can define assertions to help validate API responses. Assertions can check payloads for completeness, how timely a response was, or even whether a field has a specific value. Here's an example of an assertion I made recently.

The last benefit I'd like to highlight is that test cases created using API Tester can be exported for use within a continuous integration / continuous delivery pipeline, ensuring further updates to the APIs maintain consistency.

Talend Studio for API implementation

We've made it simple to start working with the APIs built in API Designer. There's a new metadata group called REST API definitions. A couple of clicks and I've downloaded the API and am ready to start building.

We can use this contract to bootstrap a service using the contract’s defined URIs, media types, parameters, etc. This approach expedites delivery of the backend service by reducing the complexity of defining the various functions.

There are also some updates to Talend Data Mapper: I can use the defined schema from the API definition as the return schema from TDM! This makes it a lot easier to convert data into the expected media type and structure.

Talend Cloud for API Operation     

Yep, Talend Cloud can now manage the services you've built in the Studio, just like we manage data integration jobs. If this is your first time hearing about Talend Cloud, it provides a managed environment that enables developers to publish services developed in Talend Studio into the Talend Management Console's artifact repository or a third-party repository, and to manage the various environments the service needs to be deployed to as part of the QA / DevOps process. An example of this management can be seen in the snippet below.

As you can see, there is a mountain of functionality available in the new Talend Cloud API Services. If you'd like to see more, stay tuned; we have a series of videos and enablement materials to get you all up to speed.

I look forward to hearing about the APIs you plan to build, and stay tuned to upcoming blogs from Talend if you're looking for some inspiration. I'll be following this blog up with a series of use cases we've seen and are hearing about!

The post Introducing Talend API Services: Providing Best in Class Purpose-Built Applications appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

CiviTutorial - Another MIH Success!

CiviCRM - Thu, 11/15/2018 - 12:07

We came down to the wire with CiviTutorial, having less than a day to go before the Make It Happen campaign funding its development was set to expire. In the end, we had 24 awesome donors pitch in to fund the extension and make CiviTutorial a reality.

Categories: CRM

A CiviCRM Rebirth

CiviCRM - Wed, 11/14/2018 - 17:22

An interview with Restoring the Foundations Ministry

Restoring the Foundations Ministry (RTF) is an integrated approach to biblical healing, with over 200 teams around the globe providing training and personal ministry to churches and people seeking help. The mission of the organization is to offer “hope for healing, freedom from life’s deepest struggles, and renewed purpose for living.” Jaque Orsi, office administrator at RTF, recently spoke with Cividesk to share her experiences of using CiviCRM. 

Categories: CRM

SnapLogic Data Science brings self-service to machine learning

SnapLogic - Wed, 11/14/2018 - 07:30

Today, we announced the launch of SnapLogic Data Science, a visual self-service solution for the entire machine learning (ML) lifecycle. SnapLogic Data Science, together with SnapLogic’s award-winning integration platform, the Enterprise Integration Cloud, supports data sourcing, data preparation and feature engineering, and the training, validation, and deployment of machine learning models all in one platform.[...] Read the full article here.

The post SnapLogic Data Science brings self-service to machine learning appeared first on SnapLogic.

Categories: ETL

SnapLogic November 2018 Release: Revolutionize your business with intelligent integration

SnapLogic - Wed, 11/14/2018 - 07:29

We are thrilled to announce the general availability of the November 2018, 4.15 release of the SnapLogic integration platform. This release introduces several new solutions including SnapLogic Data Science, SnapLogic API Management, and SnapLogic for B2B Integration. It also includes core platform enhancements and new feature-rich Snaps. These powerful new capabilities will enable CIOs and[...] Read the full article here.

The post SnapLogic November 2018 Release: Revolutionize your business with intelligent integration appeared first on SnapLogic.

Categories: ETL

Liferay And...  Jackson

Liferay - Tue, 11/13/2018 - 18:27
Introduction

So often when discussing how to deal with dependencies, we're looking for ways to package our third party jars into our custom modules.

There's good reason to do this. It ensures that our modules get a version of a third party jar that we've tested with. It also eliminates ambiguity over where the dependency will come from, whether it is deployed and available or not, etc.

That said, there is another option that we don't really talk about much, even though it is still a viable one. Many third party jars are actually OSGi-ready and can be deployed as modules separately from your own custom modules.

The Jackson jars, for example, are actually OSGi modules on their own and can be deployed to Liferay just by dropping them into the deploy folder.

Dependencies Deployed as Modules

So why deploy a third party dependency jar as an OSGi module instead of just as an embedded jar?

Often it comes down to either a concern about class loaders or, less frequently, a (misguided) attempt to shrink overall module size. I call the latter misguided because memory and disk are cheap, and the problems (to be discussed below) often are not worth it.

So what about the class loader concern? Well, when you are using a system that relies on class loaders to instantiate Java classes, such as Jackson marshaling JSON into Java objects via its annotations, class loader hierarchies and the normal boundaries between OSGi modules can make general OSGi usage a challenge when the annotations are used in different bundles.

For example, if you have module A and module B and both have POJOs decorated with Jackson annotations, you could run into issues: when performing a package scan for classes decorated with the annotations, an annotation loaded by a different class loader is effectively a different class and may not be visible during annotation processing.
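
To make the problem concrete, here is a toy sketch of my own (not code from this post or its repo, and assuming jackson-annotations is on the class path) showing why an annotation loaded through a different class loader effectively disappears:

import java.lang.reflect.Field;

import com.fasterxml.jackson.annotation.JsonProperty;

// Toy illustration: Field.getAnnotation() only matches when both sides see the
// exact same Class object for @JsonProperty.
public class AnnotationVisibilityCheck {

    public static void inspect(Class<?> pojoClass) {
        for (Field field : pojoClass.getDeclaredFields()) {
            // If pojoClass comes from a bundle that embeds its own copy of
            // jackson-annotations, its @JsonProperty was loaded by that bundle's
            // class loader, and this lookup can return null even though the
            // annotation is clearly present in the source.
            JsonProperty property = field.getAnnotation(JsonProperty.class);

            System.out.println(field.getName() + " -> " +
                ((property != null) ? property.value() : "annotation not visible"));
        }
    }
}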

If your package is deployed as a standalone module, though, then all bundles sharing the dependency will pull from the same module and therefore the same class loader.

A downside of this, though, is versioning. If you deploy Jackson 2.9.3 and 2.9.7, there are two competing versions available, which can still lead to class loader issues when the different versions are used at the same time. If only a single version is deployed, then you have the typical concern of all modules being stuck on an agreed-upon version.

Is My Dependency OSGi Ready?

So the first thing you'll need to know is whether your third party dependency jar is an OSGi module or not.

The most involved way to find out is to open up the jar with a zip tool and look at the contents. If the jar is a bundle, the META-INF/MANIFEST.MF file will contain OSGi headers like Bundle-Name, Bundle-SymbolicName, Bundle-Version, etc. Additionally, you may have OSGi-specific files in the META-INF folder for Declarative Services.

An easier way is just to use one of the Maven repository search tools. When looking for jackson-core 2.9.6 on mvnrepository.com, you come across a page like https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-core/2.9.6:

Under the Files section, it is shown as a bundle w/ the size. This means it is an OSGi-ready bundle. When not OSGi-ready, the search tools will typically show it as just a jar.

This and the other Jackson jars are all marked as bundles, so I know I can deploy them as modules.

Deploying Jackson as Modules

So for a future "Liferay And..." blog post, I have need of Jackson as a module instead of as just a dependency, so in this post we're going to focus on deploying Jackson as modules. Sure this may not be necessary for all deployments or usage of Jackson, but it is for me.

Okay, so our test is going to be to build a couple of modules w/ some POJOs decorated with Jackson annotations and a module that will be marshaling to/from JSON. In order to do this, we need to have the following Jackson modules deployed:

  • compile group: 'com.fasterxml.jackson.core', name: 'jackson-core', version: '2.9.6'
  • compile group: 'com.fasterxml.jackson.core', name: 'jackson-annotations', version: '2.9.6'
  • compile group: 'com.fasterxml.jackson.core', name: 'jackson-databind', version: '2.9.6'
  • compile group: 'com.fasterxml.jackson.datatype', name: 'jackson-datatype-jdk8', version: '2.9.6'
  • compile group: 'com.fasterxml.jackson.datatype', name: 'jackson-datatype-jsr310', version: '2.9.6'
  • compile group: 'com.fasterxml.jackson.module', name: 'jackson-module-parameter-names', version: '2.9.6'

Download these bundle jars and drop them into the Liferay deploy folder while Liferay is running. The bundles will deploy into Liferay and you'll see the messages in the log that the bundles are deployed and started.

Building Test Modules

So in the GitHub repo referenced below, we'll build out a Liferay workspace with four module projects:

  1. Animals - Defines the POJOs with Jackson annotations for defining different pet instances.
  2. Persons - Defines a POJO for a person to define their set of pets.
  3. Mappings - Services based upon using Jackson to marshal to/from JSON.
  4. Gogo-Commands - Provides some simple Gogo commands that we can use to test the modules w/o building out a portlet infrastructure.

The Github repo is: https://github.com/dnebing/liferay-and-jackson

The animals and persons modules are nothing fancy, but they do leverage the Jackson annotations from the deployed OSGi Jackson modules.
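
As a rough idea of what such a POJO might look like, here is a hypothetical sketch based on the JSON shown later in this post (the actual classes in the repo may differ):

import com.fasterxml.jackson.annotation.JsonProperty;

// Hypothetical sketch of an annotated POJO in the animals module. The important
// part is that @JsonProperty resolves against the single deployed
// jackson-annotations bundle rather than an embedded copy.
public class Cat {

    @JsonProperty("name")
    private String name;

    @JsonProperty("breed")
    private String breed;

    @JsonProperty("age")
    private int age;

    @JsonProperty("treat")
    private String treat;

    public Cat() {
    }

    public Cat(String name, String breed, int age, String treat) {
        this.name = name;
        this.breed = breed;
        this.age = age;
        this.treat = treat;
    }

    // Getters and setters omitted to keep the sketch short.
}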

The mappings module uses the Jackson ObjectMapper to handle the marshaling. It is capable of processing classes from the other modules.
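
A minimal sketch of what such a marshaling service could look like (hypothetical names, not the actual classes from the repo):

import java.io.IOException;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical marshaling service. Because the Jackson classes come from the
// shared OSGi bundles, the same ObjectMapper happily handles POJOs defined in
// other modules.
public class JsonMappingService {

    public String toJson(Object pojo) throws JsonProcessingException {
        return _objectMapper.writeValueAsString(pojo);
    }

    public <T> T fromJson(String json, Class<T> type) throws IOException {
        return _objectMapper.readValue(json, type);
    }

    private final ObjectMapper _objectMapper = new ObjectMapper();
}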

The gogo module contains some simple gogo shell commands:

  • jackson:createCat - Creates a Cat instance and outputs the toString representation of it. Args are [name [breed [age [favorite treat]]]].
  • jackson:createDog - Creates a Dog instance and outputs the toString representation of it. Args are [name [breed [age [likes pigs ears]]]].
  • jackson:catJson - Like createCat, but outputs the JSON representation of the cat.
  • jackson:dogJson - Like createDog, but outputs the JSON representation of the dog.
  • jackson:cat - Parses the given JSON into a Cat object and outputs the toString representation of it. The only argument is the JSON.
  • jackson:dog - Parses the given JSON into a Dog object and outputs the toString representation of it. The only argument is the JSON.

For the cat and dog commands, to pass JSON as a single argument, enclose it in single quotes:

jackson:cat '{"type":"cat","name":"claire","age":6,"breed":"house","treat":"filets"}'
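
For completeness, Gogo commands like these are simply OSGi components registered with the standard osgi.command.scope and osgi.command.function service properties. A hypothetical sketch (not the actual class from the repo, and using the Cat POJO sketched above) might look like this:

import com.fasterxml.jackson.databind.ObjectMapper;

import org.osgi.service.component.annotations.Component;

// Hypothetical Gogo command component following the standard Felix Gogo pattern.
@Component(
    immediate = true,
    property = {
        "osgi.command.scope=jackson",
        "osgi.command.function=catJson"
    },
    service = Object.class
)
public class JacksonCommands {

    // Invoked from the Gogo shell as: jackson:catJson claire house 6 filets
    public void catJson(String name, String breed, int age, String treat)
        throws Exception {

        Cat cat = new Cat(name, breed, age, treat);

        System.out.println(_objectMapper.writeValueAsString(cat));
    }

    private final ObjectMapper _objectMapper = new ObjectMapper();
}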

Conclusion

Seems like an odd place to stop, huh?

I mean, we have identified how to find OSGi-ready modules such as the Jackson modules, we have deployed them to Liferay, and we have built modules that depend upon them.

So why introduce Jackson like this and then stop? Well, it is just a preparatory blog for my next post, Liferay And... MongoDB. We'll be leveraging Jackson as part of that solution, so getting the Jackson deployment in place first is a good place to begin. See you in the next post!

David H Nebinger 2018-11-13T23:27:00Z
Categories: CMS, ECM

Hacktoberfest Celebrates 5th Anniversary

Open Source Initiative - Tue, 11/13/2018 - 07:25

Five years ago the community team at DigitalOcean wanted to create a program to inspire open source contributions. That first year, in 2014, the first Hacktoberfest participants were asked for 50 commits, and those who completed the challenge received a reward of swag. 676 people signed up and 505 forged ahead to the finish line, earning stickers and a custom limited-edition T-shirt.

This year that number is an astounding 46,088 completions out of 106,582 sign-ups. We’ve seen it become an entry point for developers contributing to open source projects: much more than a program, it’s clear that Hacktoberfest has become a global community movement with a shared set of values and a passion for giving back.

To learn more about the results from the 5th anniversary of Hacktoberfest, please check this blog post from DigitalOcean:

https://blog.digitalocean.com/a-review-of-hacktoberfest-year-5/

The Open Source Initiative would like to thank DigitalOcean not only for being a sponsor of the OSI and for hosting our website, but most importantly for creating such an inspiring program as Hacktoberfest. Happy 5th Anniversary, Hacktoberfest!

Categories: Open Source

Get that old school page editing touch back again

Liferay - Tue, 11/13/2018 - 04:56

At this year's Unconference at DEVCON in Amsterdam, Victor Valle held a session about changing the look and feel of the portal administration: things like removing the default product menu and creating your own. I missed that discussion because so many other interesting discussions were going on, so I have no idea if someone brought this up. In this blog article, I'm not going to do anything "drastic" like removing a side menu. I just want to show you how easy it is to get something back that some of us have missed since version 6.2.

I've heard some people complain about the way page editing works since version 7.0, and it seems like themelets are still pretty much an underrated or unknown addition to Liferay theming. I hope this blog will tackle both.

By default you'll have to hover over a portlet or widget to get the configuration options:

Nothing wrong with it: the page is displayed just as anyone without editing rights would see it. No need to toggle the icon that shows/hides the controls. But for those who have to do a lot of page editing or widget configuration... every second counts. We just want to click the ellipsis icon to configure a widget without having to hover over it first.

Some of us want this:

There are different ways to get this toggle-controls feature back.

After all, the controls icon did not disappear. It only becomes visible when your screen size is small enough. So it's all a matter of tweaking the styles.

Option 1: Portlet decorators

You can easily add your own portlet decorator and make it the default one in your theme. You just need to add the custom decorator in the  look-and-feel.xml of your theme:

<portlet-decorator id="show-controls" name="Show Controls">
    <default-portlet-decorator>true</default-portlet-decorator>
    <portlet-decorator-css-class>portlet-show-controls</portlet-decorator-css-class>
</portlet-decorator>

Then you will want to add some CSS to the theme so the toggle-controls icon is displayed at all times. Toggling the icon will add a CSS class "controls-visible" to the body element. This is easy: just display the "portlet-topper" every time "controls-visible" is present.

Why this option is bad for this purpose: I believe portlet decorators are meant for styling, i.e. how you want the user to see the widget when you select a certain decorator. If you use a portlet decorator for this purpose but also want to display widgets in different ways, with or without borders or titles, you'll lose the controls as soon as you assign a different decorator to a widget. I bet you'll say: "Just put the extra styles on every widget". So just forget I even brought this up.

Option 2: Add some css and js

We'll add the custom CSS to the theme:

/* old school portlet decorators, don't mind the shameless usage of !important */
.control-menu .toggle-controls {
    display: block !important;
}

.controls-visible.has-toggle-controls {
    .portlet-topper {
        display: -webkit-box !important;
        display: -moz-box !important;
        display: box !important;
        display: -webkit-flex !important;
        display: -moz-flex !important;
        display: -ms-flexbox !important;
        display: flex !important;
        position: relative;
        opacity: 1;
        transform: none !important;
    }

    section.portlet {
        border: 1px solid #8b8b8b;
        border-radius: 0.5rem;
    }
}

.controls-hidden {
    .portlet-topper {
        display: none !important;
    }
}

We're not there yet. We'll need to put a "has-toggle-controls" class on the body element, because it seems "controls-visible" is there by default, even when you're not logged in. And in this case I want to display a border around the portlet when the controls are active. So I'll set my own CSS class like this inside main.js:

var toggleControls = document.querySelector('.toggle-controls');

if (toggleControls !== null) {
    document.body.classList.add('has-toggle-controls');
}

But what if we had lots of different themes? Are we really going to add the same code over and over again? And what if the business decides to add a box-shadow effect after a few weeks?

Option 3: Themelets

A themelet is an extension of a theme. You can extend all your themes with the same themelet, and when some style needs to change, you just edit the themelet and rebuild your themes. This will require you to use the liferay-theme-generator, but I bet everyone uses it by now, right? There is more information available online on how to build and use themelets.

You can find the themelet I wrote on github.

Please be sure your theme's _custom.scss imports the CSS from themelets:

/* inject:imports */
/* endinject */

And your portal_normal.ftl should contain:

<!-- inject:js -->
<!-- endinject -->

Conclusion

When changing the behaviour or design of aspects that are not design-related in the eyes of end users or guests, I think it's always better to go with themelets. You'll rarely come across a portal with only one theme, so using themelets will make it much easier to maintain your themes.

Michael Adamczyk 2018-11-13T09:56:00Z
Categories: CMS, ECM

Save time and prevent data-mapping errors with Vtiger’s multi-module picklists

VTiger - Tue, 11/13/2018 - 03:53
Imagine a scenario where a set of values needs to be common to a couple of picklists across different objects or modules. For instance, the values for contact status need to be “Active”, “Inactive”, “Positive”, “Negative” or “Neutral” across two different modules, namely the Contact Module and the Opportunity Module. To make this happen, […]
Categories: CRM