

Take Advantage of Elastic Stack to monitor Liferay DXP

Liferay - Tue, 10/23/2018 - 07:49

As many of you probably know, starting with Liferay DXP, Elasticsearch is the default Search Engine. In fact, by default, an Elasticsearch instance is embedded in Liferay DXP (it’s a good moment to remind everyone that this is not supported for Production environments, where a separate Elasticsearch instance must be created).

In this article we’re going to outline how to use the Elastic Stack in order to monitor Liferay DXP.

My first idea for this article was a step-by-step guide, but I decided against that approach: it would require very detailed configurations that would end up slightly different on each installation, and anyone who uses the Elastic Stack should gain at least a minimum of knowledge in order to monitor what matters in their particular situation, so the learning process itself is beneficial.


So what’s the Elastic Stack?

The Elastic Stack is a set of products created by Elastic. It is also known as ELK, after Elasticsearch, Logstash, and Kibana:

  • Elasticsearch is the search engine where the information is indexed and stored.

  • Logstash is a data-processing pipeline that collects information, optionally transforms it, and sends it to Elasticsearch.

  • Kibana is a web UI where we can create dashboards and visualizations from the information that Logstash stored in Elasticsearch.

In this example we are not going to alter or consume the indexes that Liferay DXP creates in Elasticsearch; we are going to create our own indexes just for monitoring purposes.


Setting up an Elastic Stack environment.

Kibana is a central service for the ELK stack, so for practical purposes we chose to install it on the same server as Elasticsearch. Also, because Logstash has to access the Liferay logs, we chose to install a Logstash service on each Liferay node. This approach is not mandatory, though: you can install Kibana on a different server from Elasticsearch and point it to the right address. The following examples are based on a two-node Liferay cluster (node1 and node2).


Collecting data with Logstash:

The first step for the ELK stack to work is to have Logstash collect data from the Liferay logs and send it to Elasticsearch. We can create different pipelines for Logstash in order to define which data it should ingest and how to store it in Elasticsearch. Each pipeline is a configuration file (in Logstash's own configuration syntax, not JSON) where you can define three sections:

  • input → how the data is going to be collected; here we specify the Logstash input plugin we are going to use and set its parameters.
  • filter → where we can transform the data the input plugin has collected before sending it to Elasticsearch.
  • output → indicates the Elasticsearch endpoint and the name of the index where the data is going to be stored (if the index doesn't exist, it'll be created automatically).
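As a sketch, the three sections fit together like this (the log path and index name are placeholders, not values from a real installation):

```
input {
  file { path => "/path/to/application.log" }
}

filter {
  # transform or enrich each event here before indexing
}

output {
  elasticsearch {
    hosts => ["ELASTIC_SERVER_HOSTNAME:9200"]
    index => "my-index-%{+YYYY.MM.dd}"
  }
}
```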

We are going to focus on two different pipelines in this article; the Logstash documentation has general information about how to configure pipelines.


Liferay logs pipeline:

This pipeline collects the information from the Liferay logs using the file input plugin. This plugin reads the logs (specified in the input phase) and parses each line according to some conditions. In our case that parsing happens during the filter phase, where we tokenize the log message to extract the log level, java class, time, and all the data we want to extract from each log line. Finally, we choose in which index we want to store the extracted data using the output phase:

input {
  file {
    path => "/opt/liferay/logs/liferay*.log"
    start_position => beginning
    ignore_older => 0
    type => "liferaylog"
    sincedb_path => "/dev/null"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
    }
  }
}

filter {
  if [type] == "liferaylog" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:realtime}\s*%{LOGLEVEL:loglevel}\s*\[%{DATA:thread}\]\[%{NOTSPACE:javaclass}:%{DATA:linenumber}\]\s*%{GREEDYDATA:logmessage}" }
      tag_on_failure => ["error_message_not_parsed"]
      add_field => { "hostname" => "LIFERAY_SERVER_HOSTNAME" }
    }
    date {
      match => [ "realtime", "ISO8601" ]
      timezone => "UTC"
      remove_field => ["realtime"]
    }
  }
}

output {
  if [type] == "liferaylog" {
    elasticsearch {
      hosts => ["ELASTIC_SERVER_HOSTNAME:9200"]
      index => "logstash-liferay-log-node1-%{+YYYY.MM.dd}"
    }
    #stdout { codec => rubydebug }
  }
}


JVM statistics pipeline:

The previous plugin is installed by default with Logstash, but the plugin we'll use to collect JVM statistics (logstash-input-jmx) has to be installed manually.

When using this plugin, we should tell Elasticsearch that the metric values this plugin sends to a particular index have a decimal format; otherwise Elasticsearch will infer the type from the first value it receives, which could be interpreted as a long.

To configure this we can execute a simple curl command: an HTTP call to Elasticsearch with some JSON data telling it that all indexes beginning with “logstash-jmx-node” should treat these values as doubles. We only need to do this once, and from then on Elasticsearch will know how to deal with our JMX data:

curl -H "Content-Type: application/json" -XPUT "ELASTIC_SERVER_HOSTNAME:9200/_template/template-logstash-jmx-node*" -d '
{
  "template" : "logstash-jmx-node*",
  "settings" : {
    "number_of_shards" : 1
  },
  "mappings" : {
    "doc" : {
      "properties" : {
        "metric_value_number" : { "type" : "double" }
      }
    }
  }
}'

Before using this plugin, we will also need to enable the JMX connection in the Liferay JVM. For example, if running on Tomcat, we can add the standard JMX system properties to the JVM options.
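A minimal sketch of those JVM options, assuming Tomcat's bin/setenv.sh (the port matches the one used in the JMX configuration file below; authentication and SSL are disabled here only to keep the example short, which is not advisable outside a trusted network):

```shell
# Enable JMX so logstash-input-jmx can poll the Liferay JVM.
CATALINA_OPTS="${CATALINA_OPTS} -Dcom.sun.management.jmxremote \
 -Dcom.sun.management.jmxremote.port=5000 \
 -Dcom.sun.management.jmxremote.authenticate=false \
 -Dcom.sun.management.jmxremote.ssl=false"
```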


To set up this plugin we need to create two different configuration files: the usual Logstash pipeline configuration, and a JMX configuration file whose location is specified in the Logstash pipeline, in the jmx node:

  • Logstash pipeline: here we specify where the JMX configuration file (/opt/logstash/jmx) is located and which index we are going to use.
input {
  jmx {
    path => "/opt/logstash/jmx"
    polling_frequency => 30
    type => "jmx"
    nb_thread => 3
  }
}

filter {
  mutate {
    add_field => { "node" => "node1" }
  }
}

output {
  if [type] == "jmx" {
    elasticsearch {
      hosts => [ "ELASTIC_SERVER_HOSTNAME:9200" ]
      index => "logstash-jmx-node1-%{+YYYY.MM.dd}"
    }
  }
}
  • The JMX configuration file: where we decide which JMX statistics we want to collect and send to the index:
{
  "host" : "localhost",
  "port" : 5000,
  "alias" : "reddit.jmx.elasticsearch",
  "queries" : [
    { "object_name" : "java.lang:type=Memory", "object_alias" : "Memory" },
    { "object_name" : "java.lang:type=Threading", "object_alias" : "Threading" },
    { "object_name" : "java.lang:type=Runtime", "attributes" : [ "Uptime", "StartTime" ], "object_alias" : "Runtime" },
    { "object_name" : "java.lang:type=GarbageCollector,name=ParNew", "object_alias" : "ParNew" },
    { "object_name" : "java.lang:type=GarbageCollector,name=ConcurrentMarkSweep", "object_alias" : "MarkSweep" },
    { "object_name" : "java.lang:type=OperatingSystem", "object_alias" : "OperatingSystem" },
    { "object_name" : "com.zaxxer.hikari:type=Pool (HikariPool-1)", "object_alias" : "Hikari1" },
    { "object_name" : "com.zaxxer.hikari:type=Pool (HikariPool-2)", "object_alias" : "Hikari2" },
    { "object_name" : "Catalina:type=ThreadPool,name=\"http-nio-8080\"", "object_alias" : "HttpThread" },
    { "object_name" : "java.lang:type=MemoryPool,name=Metaspace", "object_alias" : "Metaspace" },
    { "object_name" : "java.lang:type=MemoryPool,name=Par Eden Space", "object_alias" : "Eden" },
    { "object_name" : "java.lang:type=MemoryPool,name=CMS Old Gen", "object_alias" : "Old" },
    { "object_name" : "java.lang:type=MemoryPool,name=Par Survivor Space", "object_alias" : "Survivor" }
  ]
}


Monitoring with Kibana:

Once all this information is being processed and indexed, it's time to create dashboards and visualizations on Kibana.

First we have to point Kibana to the Elasticsearch instance whose data we are going to consume. We'll use the kibana.yml configuration file for this purpose.
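A minimal kibana.yml sketch (the hostname is a placeholder; note that the Elasticsearch setting was renamed to elasticsearch.hosts in Kibana 7 and later):

```yaml
# Where Kibana itself listens
server.host: "0.0.0.0"
server.port: 5601

# The Elasticsearch endpoint Kibana reads from (Kibana 6.x syntax)
elasticsearch.url: "http://ELASTIC_SERVER_HOSTNAME:9200"
```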



We are also going to install a plugin (Logtrail) that allows us to view the logs through Kibana's UI, in a similar way to the tail command. This way we can share logs with developers, sysadmins, devops, project managers… without having to give all those people access to the actual server where the logs live.



How to create visualizations and dashboards

Once we have pointed Kibana to the index and installed Logtrail, we can start creating dashboards and visualizations in Kibana. This is easy to do through the UI. The steps to create a visualization are:

  1. Indicate which index patterns we are going to work with. In our case we wanted to use the JMX and log information from two separate Liferay nodes. In our example:




  2. Create a search query in Kibana using Lucene queries. In this example we’re retrieving Process and System CPU Loads retrieved via JMX:

    metric_path: "reddit.jmx.elasticsearch.OperatingSystem.ProcessCpuLoad" || metric_path: "reddit.jmx.elasticsearch.OperatingSystem.SystemCpuLoad"
  3. Create different visual components to show that information. There are different kinds of visualizations we can create, like a histogram showing the CPU usage we have recorded using the JMX plugin:

    Or a table counting how many times a Java class appears in the log with the ERROR log level:

  4. Create dashboards that group all the visualizations you want; dashboards are simply collections of visualizations:

To conclude:

The ELK stack is a very powerful way to monitor and extract information from running Liferay instances. In my opinion, the main advantages of this system are:

  • Democratized logs: everybody can access them, not only sysadmins, and the logs can be searched easily.

  • A history of JMX stats, making it possible to know what the CPU, memory, or database pools looked like at a given time.

I hope I have convinced you that ELK can help you monitor your Liferay DXP installations, and that you feel ready to start building the monitoring system that covers your needs.

Eduardo Perez 2018-10-23T12:49:00Z
Categories: CMS, ECM

LiferayPhotos: a fully functional app in 10 days using Liferay Screens. Part 2 Architecture and planning

Liferay - Tue, 10/23/2018 - 05:36

In the previous blog post we covered the beginning of the project, including a description of the app features, the advantages of using Liferay Screens, and lastly the wireframe.

In this one, we will cover how to “screenletize” the different screens of the application and build them by combining different screenlets. We will also organise the 10 days and plan the tasks we are going to work on each day.

“Screenletizing” the app

In this step, we take each screen and think about which screenlets we can use to build each part, so that we can combine those screenlets to build the final screen. Before we continue, you can check out the available screenlets here.

We will extract these images from the prototype included in the previous post.

Log In

In this case, the solution is very straightforward, we need the Login Screenlet.

Let’s cover a more complex example


We have a feed screen where the user will see all the pictures uploaded to the platform, each with its description and number of likes.

This screen is basically a list of images, but each image has a lot of additional information (user, user image, number of likes, description). To render a List of Images we have a dedicated screenlet, the ImageGalleryScreenlet.

The default version of the ImageGalleryScreenlet only displays a list of images in a Grid, but as we learned in the previous post, we can easily customise the UI.

We have to display more information in each image cell, and… exactly, we will also be using screenlets to do this.



  • To render the user information we will be using the Asset Display Screenlet. This screenlet can show any Liferay asset and, depending on the asset's inner type, it will choose a specific UI. In this case, as it is a user, we will render the name and the profile picture; and of course, to render the profile picture we will use another screenlet, the User Portrait Screenlet.
  • To display the likes, we have another screenlet: the Rating Screenlet. In this case, we don't have a default UI that represents likes, so we will need to create one.

The great thing about separating view parts into screenlets is that it allows us to reuse them in other parts of the application, or even in other applications.


After we have finished extracting screenlets of our app, we can create a schedule for the development of the application.

The app will be developed by two people. One will be working part-time and the other full time. The person that will be developing it full time is Luis Miguel Barcos, our great intern. He doesn’t have previous experience using Liferay Screens, which will make the journey even more interesting. He will write the next blog post talking about the experience and a more technical look at the project.

Here is the detailed planning of the app:


In this blog post, we have shown how we are going to “screenletize” the application in order to divide it into small and reusable pieces. We have also created a planning scheme to organise the 10 days we have to work on this app.

Be sure to check out the next and last blog post with the final result.


Victor Galan 2018-10-23T10:36:00Z
Categories: CMS, ECM

LiferayPhotos: a fully functional app in 10 days using Liferay Screens. Part 1 Introduction 

Liferay - Tue, 10/23/2018 - 05:28

Before we begin with this story, a quick introduction for those of you who don't know about Liferay Screens. Liferay Screens is a library for iOS, Android, and Xamarin that aims to speed up the development of native applications that use Liferay as a backend. It's basically a collection of components, called screenlets, which are very easy to connect to your Liferay instance and provide plenty of features.

This is going to be a series of 3 blog posts describing the creation of an iOS application, LiferayPhotos. We are going to describe how we did it from beginning to end and the advantages of using Liferay Screens in your projects. All within a 10-day timeframe.

This first blog post will cover the first aspects of the application, from the initial idea to the mockups of how the application should be.


The application that we want to build is inspired by a well-known app: Instagram. We want to build an application where a user can:

  • Log in.

  • Sign up.

  • Upload pictures, and add stickers too!

  • See other people’s uploaded pictures.

  • Comment on pictures.

  • Like pictures.

  • Be able to chat with other users.

  • See your profile with your uploaded pictures.

With all of this in mind, let's move on to the decision of using Liferay Screens to develop this application.

Why use Liferay Screens?

As you read in the title, we are going to build this app in only 10 days. Yes, you read that right. This is one of the greatest benefits of using Liferay Screens: it makes the development of apps really fast. But how is this even possible? As I mentioned earlier, the library contains a set of ready-to-use components that cover most of our use cases.

Are you tired of implementing a login screen over and over again? By just using the Login Screenlet, adding a couple of lines of code, and configuring it with your Liferay parameters, it's done! You now have a working login screen in your application.

Okay, this seems promising, but what if the UI doesn't fit your brand style? No problem whatsoever: the screenlets are designed to support this scenario. UI and business logic are decoupled, so you can change the UI easily without having to rewrite the logic.

One last thing you might ask: what if the use case I want isn't supported by the available screenlets? This is not a problem either; Liferay Screens also includes a toolkit to build your own screenlets and reuse them in your projects.


Before working on the app, we have to visualize the application wireframe in order to split and schedule the tasks involved. We have created a basic prototype of the different screens of the application:


As you can see here, there are a lot of advantages to using Liferay Screens in your developments. But this is not all… in the next two posts, we will be unveiling other interesting parts of the project and its lifecycle.

Check out the next post of the series

Victor Galan 2018-10-23T10:28:00Z
Categories: CMS, ECM

Asset Display Contributors in Action

Liferay - Sun, 10/21/2018 - 14:19

The Display Pages functionality in Liferay has always been tightly coupled to Web Content articles. We never had plans to support the same technology for other types of assets, even though we have many of them: Documents & Media, Bookmarks, Wiki, etc. Even User is an asset, and every user has a corresponding AssetEntry in the database. For Liferay 7.1 we decided to change this significantly: we introduced a new concept for Display Pages, based on fragments, that is very flexible and much more attractive than the old one... and we still support only Web Content article visualization :). The good news for developers is that the framework is now extensible, and it is easy to implement an AssetDisplayContributor and visualize any type of asset using the new fragment-based display pages. In this article I want to show you how to do it with an example.

Let's imagine that we want to launch a recruitment site, a typical one with tons of job offers, candidate profiles, thematic blogs, etc. One of the main functionalities must be a candidate profile page: a sort of landing page with the candidate's basic information, photo, personal summary, and skills. This task can be solved using the new Display Pages.

As I mentioned before, User is an asset in Liferay and there is a corresponding AssetEntry for each user, which is convenient since, for now, we support visualization only for asset entries. To achieve our goal we need two things: first, an AssetDisplayContributor implementation for User, to define which fields are mappable and which values correspond to those fields; and second, a custom friendly URL resolver to be able to reach a user's profile page via a friendly URL containing the user's screen name.

Let's implement the contributor first. It is very simple (some repeated code is skipped; the full class is available on GitHub):

@Component(immediate = true, service = AssetDisplayContributor.class)
public class UserAssetDisplayContributor implements AssetDisplayContributor {

    @Override
    public Set<AssetDisplayField> getAssetDisplayFields(
            long classTypeId, Locale locale)
        throws PortalException {

        Set<AssetDisplayField> fields = new HashSet<>();

        fields.add(
            new AssetDisplayField(
                "fullName", LanguageUtil.get(locale, "full-name"), "text"));

        /* some fields skipped here, see project source for the full
           implementation */

        fields.add(
            new AssetDisplayField(
                "portrait", LanguageUtil.get(locale, "portrait"), "image"));

        return fields;
    }

    @Override
    public Map<String, Object> getAssetDisplayFieldsValues(
            AssetEntry assetEntry, Locale locale)
        throws PortalException {

        Map<String, Object> fieldValues = new HashMap<>();

        User user = _userLocalService.getUser(assetEntry.getClassPK());

        fieldValues.put("fullName", user.getFullName());

        /* some fields skipped here, see project source for the full
           implementation */

        ServiceContext serviceContext =
            ServiceContextThreadLocal.getServiceContext();

        fieldValues.put(
            "portrait", user.getPortraitURL(serviceContext.getThemeDisplay()));

        return fieldValues;
    }

    @Override
    public String getClassName() {
        return User.class.getName();
    }

    @Override
    public String getLabel(Locale locale) {
        return LanguageUtil.get(locale, "user");
    }

    @Reference
    private UserLocalService _userLocalService;

}

As you can see, there are two main methods: getAssetDisplayFields, which defines the set of AssetDisplayField objects with the field name, label, and type (for the moment we support two types, text and image, converting all non-text values such as numbers, booleans, dates, and lists of strings to text), and getAssetDisplayFieldsValues, which provides the values for those fields for a specific AssetEntry instance. It is also possible to provide different field sets for different subtypes of entities, as we do for the different Web Content structures, using the classTypeId parameter.

The second task is to implement the corresponding friendly URL resolver so we can reach profiles by the user's screen name. Here I'll show only the implementation of the getActualURL method of the FriendlyURLResolver interface, because it is the method that matters; the full code of this resolver is also available on GitHub.

@Override
public String getActualURL(
        long companyId, long groupId, boolean privateLayout, String mainPath,
        String friendlyURL, Map<String, String[]> params,
        Map<String, Object> requestContext)
    throws PortalException {

    String urlSeparator = getURLSeparator();

    String screenName = friendlyURL.substring(urlSeparator.length());

    User user = _userLocalService.getUserByScreenName(companyId, screenName);

    AssetEntry assetEntry = _assetEntryLocalService.getEntry(
        User.class.getName(), user.getUserId());

    HttpServletRequest request =
        (HttpServletRequest)requestContext.get("request");

    ServiceContext serviceContext = ServiceContextFactory.getInstance(request);

    AssetDisplayPageEntry assetDisplayPageEntry =
        _assetDisplayPageEntryLocalService.fetchAssetDisplayPageEntry(
            assetEntry.getGroupId(), assetEntry.getClassNameId(),
            assetEntry.getClassPK());

    if (assetDisplayPageEntry == null) {
        LayoutPageTemplateEntry layoutPageTemplateEntry =
            _layoutPageTemplateEntryService.fetchDefaultLayoutPageTemplateEntry(
                groupId, assetEntry.getClassNameId(), 0);

        _assetDisplayPageEntryLocalService.addAssetDisplayPageEntry(
            layoutPageTemplateEntry.getUserId(), assetEntry.getGroupId(),
            assetEntry.getClassNameId(), assetEntry.getClassPK(),
            layoutPageTemplateEntry.getLayoutPageTemplateEntryId(),
            serviceContext);
    }

    String requestUri = request.getRequestURI();

    requestUri = StringUtil.replace(requestUri, getURLSeparator(), "/a/");

    return StringUtil.replace(
        requestUri, screenName, String.valueOf(assetEntry.getEntryId()));
}

The key part here is that we need to know which AssetDisplayPageEntry corresponds to the current user. For Web Content articles we have a UI to choose the Display Page while editing the content. In the case of User it would also be possible to create such a UI and save the ID of the page in the database, but to keep this example simple I prefer to fetch the default display page for the User class and create the corresponding AssetDisplayPageEntry if it doesn't exist. At the end of the method, we redirect the request to the Asset Display Layout Type Controller, which renders the page using the corresponding page fragments.

That's it for the code; there are a few tasks left, but nothing else to deploy. Now let's prepare the fragments, create a Display Page, and try it out! For our Display Page we need 3 fragments: Header, Summary, and Skills. You can create your own fragments with editable areas and map them as you like, but if you are not yet familiar with the new Display Pages mapping concept, I recommend you download my fragments collection and import it into your site.

When you have your fragments ready you can create a Display Page: just go to Build -> Pages -> Display Pages, click the plus button, and arrange the fragments in the order you like. This is how it looks using my fragments:

Clicking on any editable area (marked with a dashed background) lets you map that area to any available field of the available asset types (there should be two: Web Content Article and User). Choose the User type, map all the fields you would like to show on the Display Page, and click the Publish button. After publishing, it is necessary to mark our new Display Page as the default for this asset type; this action is available in the kebab menu of the display page entry:

Now we can create a user and try our new Display Page. Make sure you fill in all the fields you mapped; in my case the fields are First name, Last name, Job title, Portrait, Birthday, Email, Comments (as the Summary), Tags (as the Skills list), and Organization (as the Company). Save the user and use its screen name to get the final result:

It is possible to create a corresponding AssetDisplayContributor for any type of Asset and use it to visualize your assets in a brand new way using Fragment-based Display Pages.

Full code of this example is available here.

Hope it helps! If you need any help implementing contributors for your assets, feel free to ask in the comments.

Pavel Savinov 2018-10-21T19:19:00Z
Categories: CMS, ECM

Fragments extension: Fragment Entry Processors

Liferay - Fri, 10/19/2018 - 00:02

In Liferay 7.1 we presented a new vision of the page authoring process. The main idea was to empower business users to create pages and visualize content in a very visual way, without needing to know technical details like FreeMarker or Velocity for Web Content templates. To make this possible we introduced the fragment concept.

In our vision, a fragment is a building block which can be used to create content pages, display pages, or content page templates. A fragment consists of HTML markup, a CSS stylesheet, and JavaScript code.

Although we really wanted to create a business-user-friendly application, we always keep our strong developer community and its needs in mind. The Fragments API is extensible and allows you to create custom markup tags to enrich your fragments' code, and in this article I would like to show you how to create your own processors for the fragment markup.

As an example, we are going to create a custom tag which shows a UNIX-style fortune cookie :). Our fortune cookie module has the following structure:

We use the Jsoup library to parse the fragments' HTML markup, so we have to include it in our build file (since it doesn't come with the Portal core) among the other dependencies:

sourceCompatibility = "1.8"
targetCompatibility = "1.8"

dependencies {
    compileInclude group: "org.jsoup", name: "jsoup", version: "1.10.2"
    compileOnly group: "com.liferay", name: "com.liferay.fragment.api", version: "1.0.0"
    compileOnly group: "com.liferay", name: "com.liferay.petra.string", version: "2.0.0"
    compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "3.0.0"
    compileOnly group: "javax.portlet", name: "portlet-api", version: "3.0.0"
    compileOnly group: "javax.servlet", name: "javax.servlet-api", version: "3.0.1"
    compileOnly group: "org.osgi", name: "org.osgi.service.component.annotations", version: "1.3.0"
}

The OSGi bnd.bnd descriptor has nothing special because we don't export any packages and don't provide any capabilities:

Bundle-Name: Liferay Fragment Entry Processor Fortune
Bundle-SymbolicName: com.liferay.fragment.entry.processor.fortune
Bundle-Version: 1.0.0

Every Fragment Entry Processor implementation has two main methods: the first processes the fragment's HTML markup, and the second validates the markup to avoid saving fragments with invalid markup.

/**
 * @param  fragmentEntryLink Fragment entry link object to get editable
 *         values needed for a particular case of processing.
 * @param  html Fragment markup to process.
 * @param  mode Processing mode (see FragmentEntryLinkConstants).
 * @return Processed fragment markup.
 * @throws PortalException
 */
public String processFragmentEntryLinkHTML(
        FragmentEntryLink fragmentEntryLink, String html, String mode)
    throws PortalException;

/**
 * @param  html Fragment markup to validate.
 * @throws PortalException In case of any invalid content.
 */
public void validateFragmentEntryHTML(String html) throws PortalException;

The FragmentEntryLink object gives us access to the particular fragment usage on a page, display page, or page template, and can be used if we want the result to depend on the parameters of that particular usage. The mode parameter can be used to add (or remove) processing options in EDIT (or VIEW) mode.

In this particular case, we don't need the validation method, but we have a good example in the Portal code.

Let's implement our fortune cookie tag processor! The only thing we have to do here is iterate over all the fortune tags we find and replace each one with a random cookie text. As I mentioned before, we use Jsoup to parse the markup and work with the document:

@Override
public String processFragmentEntryLinkHTML(
    FragmentEntryLink fragmentEntryLink, String html, String mode) {

    Document document = _getDocument(html);

    Elements elements = document.getElementsByTag(_FORTUNE);

    Random random = new Random();

    elements.forEach(
        element -> {
            Element fortuneText = document.createElement("span");

            fortuneText.attr("class", "fortune");
            fortuneText.text(_COOKIES[random.nextInt(_COOKIES.length)]);

            element.replaceWith(fortuneText);
        });

    Element bodyElement = document.body();

    return bodyElement.html();
}

private Document _getDocument(String html) {
    Document document = Jsoup.parseBodyFragment(html);

    Document.OutputSettings outputSettings = new Document.OutputSettings();

    outputSettings.prettyPrint(false);

    document.outputSettings(outputSettings);

    return document;
}

private static final String[] _COOKIES = {
    "A friend asks only for your time not your money.",
    "If you refuse to accept anything but the best, you very often get it.",
    "Today it's up to you to create the peacefulness you long for.",
    "A smile is your passport into the hearts of others.",
    "A good way to keep healthy is to eat more Chinese food.",
    "Your high-minded principles spell success.",
    "The only easy day was yesterday."
};

private static final String _FORTUNE = "fortune";


That is it. After deploying this module to our Portal instance, the fortune tag is ready to use in the fragments editor:

It is up to you how to render your own tags: which attributes to use and which technology to process the tags' content with. You can even create your own script language, or apply one you already have in your CMS to avoid massive refactoring and use existing templates as-is.

Full Fortune Fragment Entry Processor code can be found here.

Hope it helps!

Pavel Savinov 2018-10-19T05:02:00Z
Categories: CMS, ECM

Using BOMs to Manage Liferay Dependency Versions

Liferay - Wed, 10/17/2018 - 15:15

Liferay is a large project, and many developers who are attempting to get their customizations to work with Liferay will often end up asking the question, "What version of module W should I use at compile time when I'm running on Liferay X.Y.Z?" To answer that question, Liferay has some instructions on how to find versions in its document, Configuring Dependencies.

This blog entry is really to talk about what to do in situations where those instructions just aren't very useful.

Path to Unofficial BOMs

First, a little bit of background, because I think context is useful to know, though you can skip it if you want to get right to working with BOMs.

Back in late 2016, I started to feel paranoid that we'd start introducing major version changes to packages in the middle of a release and nobody would notice. To ease those concerns, I wrote a tool that indexed all of the packageinfo files in Liferay at each tag, and then I loaded up these metadata files with a Jupyter notebook and did a check for a major version change.

Then, like many other “is it worth the time?” problems, it evolved into a script that I'd run once a week, and a small web-based tool so that I wouldn't have to fire up Jupyter every time I needed to check what was essentially static information.

Fast forward to February 2017, and our internal knowledge base was updated to allow for a new wiki format which (accidentally?) provided support for HTML with script tags. So, I chose to share my mini web-based tool on the wiki, which then led our support team in Spain to share a related question that they'd been pondering for a while.

Imagine if you happened to need to follow the Liferay document, Configuring Dependencies, for a lot of modules. Doesn't that lookup process get old really fast?

So, given that it was clearly possible to create an unofficial reference document for every Liferay exported package, wouldn't it be nice if we could create an official reference document that recorded every Liferay module version?

Since I had all of the metadata indexed anyway, I put together a separate tool that displayed the information stored in bnd.bnd files at every tag, which sort of lets you look up module version changes between releases. This let people get a sense of what an official reference document might look like.

(Note: I learned a year later that bnd.bnd files are not the correct file to look at if you want to know the version at a given tag. Rather, you need to look at the files saved in the modules/.releng folder for that information. So in case it helps anyone feel better, Liferay's release and versioning process isn't obvious to anyone not directly involved with the release process, whether you're a Liferay employee or not.)

From the outside looking in, you might ask, why is it that our team didn't ask for Liferay to officially provide a "bill of materials" (BOMs), as described in the Introduction to Dependency Mechanism in the official Maven documentation? That way, you'd only specify the version of Liferay you're using, and the BOM would take care of the rest. If such a BOM existed, a lookup tool would be completely unnecessary.

Well, that's how the request actually started at the time of DXP's release, but since it wasn't happening, it got downgraded to a reference document which looked immediately achievable.

Fast forward to today. Still no official reference document for module versions, still no official Liferay BOMs.

However, by chance, I learned that the IDE team has been evaluating unofficial BOMs, currently located on. These BOMs were generated as proofs of concept of what such a BOM might include, and are referenced in some drafts of Liferay IDE tutorials. Since I now had an idea of what the IDE team itself felt a BOM should look like, I updated my web-based tool to use all of the collected metadata to dynamically generate BOMs for all indexed Liferay versions.

Install the Unofficial BOM

For sake of an example, assume that you want to install release-dxp-bom-7.1.10.pom.

The proof of concept for this version exists in the liferay-private-releases repository of, and Liferay employees can set up access to that repository to acquire that file. Since there are no fix packs, it is also functionally equivalent to the original 7.1.0 GA1 release, which you can use instead.

However, if you wish to use a version for which a proof of concept has not been generated (or if you're a non-employee wanting to use an unofficial DXP BOM), you can try using Module Version Changes Since DXP Release to use the indexed metadata and generate a BOM for your Liferay version. If you go that route, open up the file in a text editor, and you should find something that looks like the following near the top of the file:

<groupId>com.liferay</groupId>
<artifactId></artifactId>
<version>7.1.10</version>
<packaging>pom</packaging>

With those values for the GROUP_ID, ARTIFACT_ID, VERSION, and PACKAGING, you would install the BOM to your local Maven repository by substituting in the appropriate values into the following mvn install:install-file command:

mvn install:install-file -Dfile=release-dxp-bom-7.1.10.pom \
    -DgroupId="${GROUP_ID}" -DartifactId="${ARTIFACT_ID}" \
    -Dversion="${VERSION}" -Dpackaging="${PACKAGING}"

And that's basically all you need to do when installing a BOM that's not available in any repository you can access.

Install Multiple BOMs

If you only have a handful of BOMs, you could repeat the process mentioned above for each of your BOMs. If you have a lot of BOMs to install (for example, you're a Liferay employee that might need to build against arbitrary releases, and you decided to use the earlier linked page and generate something for every Liferay fix pack), you may want to script it.

To keep things simple for pulling values out of an XML file, you should install the Python package yq, which provides the tool xq for XML processing at the command line. It is similar to the popular tool jq, which provides JSON processing at the command line.

pip install yq

Once yq is installed, you can add the following to a Bash script to install the auto-generated BOMs to your local Maven repository:

#!/bin/bash

install_bom() {
    local GROUP_ID=$(cat ${1} | xq '.project.groupId' | cut -d'"' -f 2)
    local ARTIFACT_ID=$(cat ${1} | xq '.project.artifactId' | cut -d'"' -f 2)
    local VERSION=$(cat ${1} | xq '.project.version' | cut -d'"' -f 2)
    local PACKAGING=$(cat ${1} | xq '.project.packaging' | cut -d'"' -f 2)

    mvn install:install-file -Dfile=${1} -DgroupId="${GROUP_ID}" \
        -DartifactId="${ARTIFACT_ID}" -Dversion="${VERSION}" \
        -Dpackaging="${PACKAGING}"

    echo "Installed ${GROUP_ID}:${ARTIFACT_ID}:${VERSION}:${PACKAGING}"
}

for bom in *.pom; do
    install_bom ${bom}
done

Use BOMs in Blade Samples

In case you've never used a BOM before, I'll show you how you would use them if you were to build projects in the Blade Samples repository.

Reference the BOM in Maven

First, update the parent pom.xml so that child projects know which dependency versions are available by simply adding the BOM as a dependency.

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.liferay.portal</groupId>
            <artifactId></artifactId>
            <version>7.1.10</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

We set this to be an import scope dependency, so that we don't have to download all of Liferay's release artifacts just to have the version numbers (they will be downloaded as-needed when the specific artifacts are resolved).

Then, in order to have a child project use this version, simply update the pom.xml in the child project to not include the version explicitly for any of the dependencies.

<dependency>
    <groupId>com.liferay.portal</groupId>
    <artifactId>com.liferay.portal.kernel</artifactId>
    <scope>provided</scope>
</dependency>

As noted in Introduction to Dependency Mechanism, the version specified in the parent POM will then be chosen as the dependency version. Running mvn package in the child project will then download the actual versions for the noted Liferay release.

Note: If this is the first time you've run any Maven commands in the liferay-blade-samples repository, you'll want to make sure that all of the parent projects are installed or the build will fail. If you are new to Maven and aren't sure how to read pom.xml files, this is achieved with the following steps:

  1. Run mvn -N install in liferay-blade-samples/maven
  2. Run mvn install in liferay-blade-samples/parent.bnd.bundle.plugin.
Reference the BOM in Gradle

Liferay workspace uses an older version of Gradle, and so BOMs aren't supported by default. To get support for BOMs, we'll first need to bring in io.spring.gradle:dependency-management-plugin:1.0.6.RELEASE.

The first step in doing this is to update the parent build.gradle so that Gradle knows where to find the plugin.

buildscript {
    dependencies {
        ...
        classpath group: "io.spring.gradle", name: "dependency-management-plugin", version: "1.0.6.RELEASE"
    }
    ...
}

The next step in doing this is to update the parent build.gradle so that Gradle makes sure to apply the plugin to each child project.

subprojects { subproject ->
    ...
    apply plugin: "io.spring.dependency-management"
    ...
}

Because we're installing the BOMs to our local Maven repository, the next step in doing this is to update the parent build.gradle so that Gradle knows to check that local repository. We can then also add the BOM to a dependencyManagement block.

subprojects { subproject ->
    ...
    repositories {
        mavenLocal()
        ...
    }
    dependencyManagement {
        imports {
            mavenBom ""
        }
    }
    ...
}

Then, in order to have a child project use this version, simply update the build.gradle in the child project to not include the version explicitly for any of the dependencies.

dependencies {
    compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel"
    ...
}

Minhchau Dang 2018-10-17T20:15:00Z
Categories: CMS, ECM

We want to invite you to DEVCON 2018

Liferay - Tue, 10/16/2018 - 10:04

Every year we, the developers doing amazing things with Liferay's products, have this unique opportunity to meet, learn and enjoy those long technical discussions with each other. All of that happens at DEVCON, our main developer conference, which is taking place in Amsterdam from November 6th to 8th. This year's agenda is filled with sessions delivering in-depth technical details about the products, presenting new technologies and showcasing spectacular use cases.

Following the tradition, DEVCON starts with an un-conference, the day before the main conference. That is a full day consisting solely of the best parts of any traditional conference - the meetings and conversations at the halls between the talks. It's a day full of discussions with experts and colleagues on topics that attendees bring in.

We try to keep the prices for DEVCON at a reasonable level and provide several kinds of promotions for partners and organizations we have business relationships with. Yet there are talented developers in our community who work alone, or for non-profit organizations, or in less developed parts of the world, or who for whatever reason cannot afford a DEVCON ticket. This year we want to help some of them.

We have free DEVCON tickets to give away!

As much as we would love to invite the whole community, we have to live by the market rules! So we only have a limited number of tickets. To help us decide, please send an email to with the subject "Free DEVCON ticket" and tell us why you think you should be one of the people we give a free ticket to. We will decide between those who have the most convincing, creative and fun reasons.

See you in Amsterdam!

David Gómez 2018-10-16T15:04:00Z
Categories: CMS, ECM

Gradle Plugin to Manage Properties Files

Liferay - Mon, 10/15/2018 - 13:01

GitHub repository:


This Gradle plugin lets you manage the properties files in your Liferay workspace (versions 7.0 and 7.1).

The Blade tool initializes this kind of workspace with a folder named configs. There are several folders inside the configs folder:

  • common
  • dev
  • local
  • prod
  • uat

It's very common to need different values for the same properties depending on your environment. This plugin helps you manage this setup: it copies all properties files from one common folder to each environment folder, replacing every property found in the filter files with the correct value.

How to use

First you will need the plugin jar file. You can download the latest version from (Maven Central version coming soon) or download the source code from this GitHub repository and compile it. If you download the jar file, you will need to move it to the correct path in your local repository (the Gradle coordinates are devtools.liferay:portal-properties:1.0.0). If you download the source code and compile it instead, you will need to execute the install Maven task to install the jar file in the correct path in your local repository.

After the jar file is fetched, you will need to set up your Liferay workspace by creating two new folders. You can create these folders in any path you want, but we recommend creating them inside the common folder (in the configs folder).

Now you will need to set up the plugin in your build.gradle file by adding these lines:

buildscript {
    dependencies {
        classpath group: "devtools.liferay", name: "portal-properties", version: "1.0.0"
    }
    repositories {
        mavenLocal()
        maven {
            url ""
        }
    }
}

apply plugin: "devtools-liferay-portal-properties"

buildproperties {
    descFolderPath = 'configs'
    originFolderPath = 'configs/common/origin'
    keysFolderPath = 'configs/common/keys'
}

build.finalizedBy(buildproperties)

In this example we're going to use the configs/common/origin folder to keep the original properties files (with ${} placeholders), and the configs/common/keys folder to keep the different values for the properties. In detail:

  • Dependencies: the Gradle coordinates of DevTools Liferay Portal Properties are devtools.liferay:portal-properties:1.0.0.
  • Repositories: you will need the mavenLocal repository because you've moved the plugin jar file to your local Maven repository.
  • Apply plugin: the DevTools Liferay Portal Properties plugin id is devtools-liferay-portal-properties.
  • BuildProperties: in this section we put all the configuration parameters. In the 1.0.0 release we have:
    • descFolderPath: path where the properties files will be copied and the properties replaced.
    • originFolderPath: location of the original properties files (with ${} filter params).
    • keysFolderPath: location of the filter properties files.
  • build.finalizedBy: with this command the plugin also runs during the standard build stage, not only when buildproperties is invoked.

It's time to add your properties files.

In the example we've created four filter files in the keysFolderPath folder (configs/common/keys):

  • The content of these files is very similar (

File names (without the .properties extension) must be equal to the environment folder names in the descFolderPath folder.

In the example we've created only one properties file in the originFolderPath folder (configs/common/origin). But we could put more properties files there, and all of them would be copied and filtered. The content of in configs/common/origin:

testKey=testValue
test1Key=${test1}

Now you are able to generate your environment-filtered properties files with the buildproperties Gradle task, or with the standard build Gradle task.

gradle buildproperties
gradle build

This is a typical log of the process:

:buildproperties
Build properties task...configs
Settings:
    destination folder path: configs
    origin folder path: configs/common/origin
    keys folder path: configs/common/keys
Parsing dev environment...
Copying C:\dev\workspaces\devtools\liferay\portal-properties-test\liferay-workspace\configs\common\origin\ to C:\dev\workspaces\devtools\liferay\portal-properties-test\liferay-workspace\configs\dev
WARNING: Property not found in file on dev folder (${test1})
WARNING: Property not found in file on dev folder (${test2})
Parsing local environment...
Copying C:\dev\workspaces\devtools\liferay\portal-properties-test\liferay-workspace\configs\common\origin\ to C:\dev\workspaces\devtools\liferay\portal-properties-test\liferay-workspace\configs\local
Parsing prod environment...
Copying C:\dev\workspaces\devtools\liferay\portal-properties-test\liferay-workspace\configs\common\origin\ to C:\dev\workspaces\devtools\liferay\portal-properties-test\liferay-workspace\configs\prod
WARNING: Property not found in file on prod folder (${test1})
Parsing uat environment...
Copying C:\dev\workspaces\devtools\liferay\portal-properties-test\liferay-workspace\configs\common\origin\ to C:\dev\workspaces\devtools\liferay\portal-properties-test\liferay-workspace\configs\uat
WARNING: Property not found in file on uat folder (${test1})
BUILD SUCCESSFUL
Total time: 0.275 secs

You will see WARNING logs when your original properties files contain properties that have no corresponding entry in your filter properties files.
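Conceptually, the filtering step scans each line for ${...} placeholders and looks them up in the environment's filter properties, warning when a key is missing. Here is a minimal sketch of that idea (illustrative only, not the plugin's actual code):

```java
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PropertyFilter {

    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^}]+)\\}");

    // Replace every ${key} placeholder in a template line with the value
    // from the environment's filter properties. If a key has no filter
    // value, keep the placeholder as-is and log a warning.
    public static String filter(String line, Properties keys, String env) {
        Matcher matcher = PLACEHOLDER.matcher(line);
        StringBuffer sb = new StringBuffer();

        while (matcher.find()) {
            String value = keys.getProperty(matcher.group(1));

            if (value == null) {
                System.out.println(
                    "WARNING: Property not found on " + env + " folder (" +
                        matcher.group() + ")");

                matcher.appendReplacement(sb, Matcher.quoteReplacement(matcher.group()));
            }
            else {
                matcher.appendReplacement(sb, Matcher.quoteReplacement(value));
            }
        }

        matcher.appendTail(sb);

        return sb.toString();
    }
}
```

Applied line by line to each file in originFolderPath, with one Properties object per environment folder, this produces the filtered copies described above.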

You can review the Liferay test project at

Ignacio Roncero Bazarra 2018-10-15T18:01:00Z
Categories: CMS, ECM

Joomla 3.8.13 Release

Joomla! - Tue, 10/09/2018 - 08:45

Joomla 3.8.13 is now available. This is a security release for the 3.x series of Joomla which addresses 5 security vulnerabilities.

Categories: CMS

Liferay Screens meets React Native, the sequel

Liferay - Mon, 10/08/2018 - 11:01

First of all, for those of you who don't know about it: Liferay Screens is a component library based on components called Screenlets. A Screenlet is a visual component that you insert into your native app to leverage Liferay's content and services, allowing you to create complex native applications for iOS and Android very fast. Awesome, isn't it?

BUT, do you need to create the SAME application for iOS and Android, with the SAME features, twice? OK, with Screenlets it does not take too much time, because most of the boring logic is encapsulated inside the Screenlet and you only need to connect the dots. But wouldn't it be fantastic to have only one project and share the code between the two platforms?

How can we make this possible? Have you heard about React Native?


As you may know, React Native is a framework that allows you to create native applications (Android and iOS) in JavaScript using React. This avoids having to maintain two different codebases, one per platform. It's based on components, so the Screenlet concept suits React very well.

A long time ago, when React Native was released, we made a first proof of concept with some of the Screenlets available at that moment. Now we have come back to this idea and made another proof of concept. This one features all our brand new and more complex Screenlets and, yes, Android is supported too.

With this prototype we aim to provide a solution to make mobile app development even faster with React Native, so we can use the Screenlets the same way you would use any React Native component, like a Button component. Great! Do you want to see how it works? Take a look at the next video; it shows you how to use our library in React Native.

As you can see in the video, using Screenlets from React Native is very easy. You only have to instantiate the Screenlet that you want to use, give it a style with height and width (otherwise the Screenlet will not show) and, if you consider it appropriate, handle the events that the Screenlet sends.

To handle an event you have to specify a callback function that manages the event in question. E.g., in the LoginScreenlet you can handle the onLoginSuccess event, fired when the user logs in correctly.

Of course, the attributes (known as props in React) of the Screenlets depend on the Screenlet you use, so some Screenlets have required attributes; e.g., the UserPortraitScreenlet needs the userId attribute.
To use all of this functionality in your React project, you have to configure your project following the steps in this video. Also, in the project's README you can find a description of the main steps to configure your React Native project.

What is the status of the project?

For now this is a prototype. Even so, ALL Screenlets are available in React Native. In total, we have 21 Screenlets on Android and 22 on iOS (the FileDisplayScreenlet is only available on iOS). To play with them, we recommend using the most common Screenlets, like the ImageGalleryScreenlet, which shows an image gallery, the UserPortraitScreenlet, the CommentListScreenlet, which shows the comment list of an asset, and, of course, the LoginScreenlet, but you can use whatever you want.
So you can explore and tinker with them. Here you have the project.

How it works ?

We don't want to bore you with technical details. Basically explained, we made a bridge: we built one side in the native part and the other side in the React Native part, allowing communication between the two and rendering of the Screenlets.

What now?

Well, now it depends on you. You have the project to play with. We are open to suggestions and feedback. Honestly, we are very happy with the result for now.

Thanks for reading.


Luis Miguel Barco 2018-10-08T16:01:00Z
Categories: CMS, ECM

Listing out context variables 

Liferay - Mon, 10/08/2018 - 04:21
What's Context Contributor?

While developing a theme, I wondered: how can I know all the variables that I can access in my FreeMarker templates? Wouldn't it be great if I could write a utility program that lists all the variables and objects that can be accessed from our theme or other template files? The Context Contributor concept can be used for this purpose. Using a Context Contributor, we can write a piece of code to inject contextual information that can be reused in various template files. If you are not aware of context contributors, please visit my article.

How to create context contributor?

Using the Liferay IDE we can create the project structure. 


The Code: 

Our context contributor class has to implement the TemplateContextContributor interface, and we need to implement the method below.

    public void prepare(Map<String, Object> contextObjects, HttpServletRequest request) {
        // The fun part is here
    }

If we look at the above code, the first parameter, contextObjects, is the map which contains all the contextual information as key-value pairs. We can iterate over all the items of the map and write them to a file. Here is the complete code of the method. It writes a file to my D drive with the file name all-variables.html. Of course, you can change it the way you want.

    public void prepare(
        Map<String, Object> contextObjects, HttpServletRequest request) {

        try {
            PrintWriter writer = new PrintWriter("D:\\all-variables.html", "UTF-8");

            StringBuffer stb = new StringBuffer();

            stb.append("<table border=\"1\">");
            stb.append("<tr><th>Variable Name</th><th>Variable Value</th></tr>");

            for (Map.Entry<String, Object> entry : contextObjects.entrySet()) {
                stb.append("<tr><td>");
                stb.append(entry.getKey());
                stb.append("</td><td>");
                stb.append(entry.getValue());
                stb.append("</td></tr>");
            }

            stb.append("</table>");

            writer.print(stb.toString());
            writer.close();
        } catch (FileNotFoundException | UnsupportedEncodingException e) {
            e.printStackTrace();
        }
    }
Just deploy the module and access any page. The code will be executed and your file is ready. Now you have all the contextual information, which can be used in theme development as well as for writing ADTs.

The output of the code:


Have fun...

Hamidul Islam 2018-10-08T09:21:00Z
Categories: CMS, ECM

Why I'm Flying South to LSNA2018

Liferay - Sat, 10/06/2018 - 20:10
or, How to blow a Saturday evening writing a blog post just because the kids don't want to hang out with you


Here are the 5 reasons I am flying down tomorrow evening.

I've been to LSNA twice before, in 2015 and 2016. I remember the energy and the ambience. Some of the topics deserve waaaay more than the 30 minutes or hour that is allocated to them, but then the presentations are designed to leave you with just enough to take a deep dive on whichever topic interests you, and on that front, they absolutely deliver. So, here's to more of that.

Unconference. I've never attended one of these before, but the prospect has me interested. I mean, it looks like the attendees get to carve out the day's agenda. I'll be bringing my list of topics fwiw. Something tells me there'll be enough knowledge sharing to go all around. 

Speed consulting. I hope I get to reserve a spot. I have a half-baked design approach around a SAML-authentication requirement using the SAML connector plugin - just a lot of holes that need plugging. Hoping a 20-minute session will help clear things up for me.

Agenda topics: As always, great spread! Here's the top 5 items on my radar at this time:

  • Search (Oct 9, 10:35)
  • Securing your APIs for Headless Use (Oct 10, 11:00)
  • Extending Liferay with OSGI (Oct 9, 4:30)
  • Liferay Analytics Cloud (Oct 10, 10:20)
  • Building Amazing Themes (Oct 9, 3:50)

Food. I have on my to-do list to eat a bowl of authentic étouffée. I will have to seek out the best place for this.

Javeed Chida 2018-10-07T01:10:00Z
Categories: CMS, ECM

Adding 2FA to Liferay DXP 7.1

Liferay - Wed, 10/03/2018 - 12:09

We recently had a requirement to add 2-Factor Authentication (2FA) support for a demo, so I am pleased to share our implementation with the community.



On login the user sees a new 'Authenticator Code' field below Password:



The user populates their credentials, launches Google Authenticator app (or other 2FA app) on their phone and gets their code:



The user enters it on screen, clicks Sign In and hey presto, they have logged in with 2FA.
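Under the hood, authenticator apps and the server derive the same one-time code from the shared Secret Key and the current time (TOTP, RFC 6238). As an illustrative sketch of that derivation (not the demo's actual implementation, which is in the linked source):

```java
import java.nio.ByteBuffer;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class Totp {

    // Derive a 6-digit TOTP code from a raw shared secret and a Unix
    // timestamp, using a 30-second time step (RFC 6238 defaults).
    public static String code(byte[] secret, long unixSeconds) {
        try {
            long counter = unixSeconds / 30;
            byte[] message = ByteBuffer.allocate(8).putLong(counter).array();

            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(secret, "HmacSHA1"));
            byte[] hash = mac.doFinal(message);

            // Dynamic truncation (RFC 4226): pick 4 bytes at a hash-derived
            // offset, mask the sign bit, and keep the last 6 decimal digits.
            int offset = hash[hash.length - 1] & 0x0f;
            int binary = ((hash[offset] & 0x7f) << 24)
                | ((hash[offset + 1] & 0xff) << 16)
                | ((hash[offset + 2] & 0xff) << 8)
                | (hash[offset + 3] & 0xff);

            return String.format("%06d", binary % 1_000_000);
        }
        catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

The login succeeds when the code the user types matches the one the server derives for the current time step (implementations usually tolerate a step or two of clock drift).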


User setup

QR Codes are used to share the profile details with the end user:



These are shared with the end user by email, and for convenience (e.g. for demos & testing) the QR Code is available through the Liferay profile screens (on the Password tab):




To simplify rollout:

  • QR Codes used to configure the 2FA app. (Alternatively the user can manually configure the 2FA app.)
  • Users created after the full set of application modules are deployed will automatically be assigned a Secret Key on account creation and will be emailed a link to the QR Code.
  • There is an optional activator bundle that will generate Secret Keys and email QR Codes to all users.
  • Administrators can bypass 2FA and a custom User Group can be created to allow certain users to bypass 2FA if required.


Source & Documentation

The source is available here: including a readme with deployment steps and more information on configuration, limitations (e.g. storing Secret Keys in plain text) etc.

Michael Wall 2018-10-03T17:09:00Z
Categories: CMS, ECM

Upgrade WURFL's database into Liferay Mobile Device Detection Lite 

Liferay - Tue, 10/02/2018 - 10:02

If you're reading this post, it's because you need to know which devices access your Liferay through Liferay Mobile Device Detection Lite. Especially if you cannot explain why Liferay detects a different version of your modern, cool and super-updated device!

Don't worry! I'll try to explain what to do.

WURFL's database

Before explaining, do you know the WURFL database? If you don't, you can watch this short video!

In order to detect your device, you already know that you have to download and install Liferay Mobile Device Detection Lite from the marketplace.

This app contains a WURFL database prepopulated inside the bundle through an external service called 51Degrees. This database is populated only during the build of the bundle, not at runtime.

processResources {
    into("META-INF") {
        from {
            FileUtil.get(project, "")
        }
    }
}

(code from the build.gradle of this app)

The result was a 51Degrees.dat file inside the META-INF folder and, as you can imagine, this file is the engine of the device detection process.

The last release (build) of Liferay Mobile Device Detection Lite was one year ago on the marketplace, so the device data is now very old.

How to upgrade this WURFL's database

On the following image you can see the configuration of Liferay Mobile Device Detection Lite (51Degrees Device Detection) and how the WURFL database is linked.

Unfortunately this configuration only checks the file system inside your bundle; we can't link a URL or set an absolute path to a data file placed elsewhere.

The only way to add or replace files inside an existing bundle is a fragment, so we'll use this approach to add a new WURFL database.

You can check my project on GitHub here; I have put the file under the META-INF folder, and through the bnd file we tell Liferay to "put" this file inside the original bundle.
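For reference, a fragment's bnd file might look roughly like this; note that the Fragment-Host symbolic name and the file names below are illustrative assumptions, not the actual values from the project:

```
Bundle-Name: WURFL Data Fragment
Bundle-SymbolicName: com.example.wurfl.data.fragment
Bundle-Version: 1.0.0
# Attach to the host bundle that reads the data file (symbolic name assumed):
Fragment-Host: com.liferay.portal.mobile.device.recognition.fiftyonedegrees
# Ship the new data file under META-INF (file name assumed):
-includeresource: META-INF/51Degrees-new.dat=51Degrees-new.dat
```

Once the fragment is deployed, OSGi merges its resources into the host bundle's classpath, which is why the configuration can then see the new file.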


At the end we can change the configuration to link the new WURFL database and restart the server.


This database is not updated daily; you can check the update status of the file here. When you add a new file, don't reuse the same filename; change it.

Davide Abbatiello 2018-10-02T15:02:00Z
Categories: CMS, ECM