

jQuery in Liferay 7.0

Liferay - Fri, 09/14/2018 - 14:37

So those who know me or have worked with me know that I hate theming.

I do. I find it to be one of the hardest things in Liferay to get right.

I can build modules all day long, but ask me how to make the default buttons increase the height by a couple of pixels and change the color to orange, and I honestly have to start digging through doco and code to figure out how to do it.

Friends that I work with that focus on front end stuff? They run circles around me. Travis and Alex, you guys know I'm referring to you. AMD loader issues? I feel helpless and have to reach out to another friend, Chema, to set me straight.

So recently I was working with a client who was trying to get a silly jQuery mask plugin to work, and they were struggling. They asked for my help.

Well, I got it working through a bunch of trial and error. What I was missing was kind of a straight-forward guide telling me how to build a theme that had jQuery in it (a full jQuery, not the minimal version Liferay needs for Bootstrap) and would allow me to pull in perhaps some jQuery UI, but at the very least I needed to get the Mask plugin to work.

Since I couldn't find that guide, well I just had to write a blog that could be that guide.

Note that I haven't shown this to Travis, Alex or even Chema. I honestly expect when they see what I've done here, they will just shake their heads, but hopefully they might point out my mistakes so I get all of the details right.

Creating The Theme

So I have to rely on the doco and tooling to help me with theme creation. Fortunately we all have access to the documentation, because that's my go-to starting point.

I used the theme generator, version 7.2.0 (version 8.0.0 is currently beta as I write this, but I'm guessing it is really focused on Liferay 7.1). I pretty much used all of the defaults for the project, targeting Liferay 7.0 and using Styled as my starting point. This gave me a basic Gulp-based theme project which is as good a starting point as any.

I used the gulp build command to get the build directory that has all of the base files. I created the src/templates directory and copied build/templates/portal_normal.ftl over to the src/templates directory.

Adding jQuery

So there are two rules that I know about for including jQuery in the theme, thanks to my friend Chema:

  1. Load jQuery after Liferay/AUI has loaded the minimal jQuery.
  2. Use no conflict mode.

Additionally there is a best practice recommendation to use a namespace for your jQuery. This will help to ensure that you isolate your jQuery from anything else going on in the portal.

So I know the rules, but knowing the rules and implementing a solution can sometimes seem worlds apart.
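Rule 2 is easy to state but subtle in effect, so here's a minimal, runnable sketch. The object below is NOT real jQuery, just a stand-in that mimics the noConflict(true) contract, purely to illustrate what the theme's one-liner does to the global $ and jQuery names:

```javascript
// Minimal mock of jQuery's noConflict contract -- NOT real jQuery,
// just an illustration of what noConflict(true) does to the globals.
var globalScope = (typeof window !== 'undefined') ? window : globalThis;

globalScope.$ = 'AUI-or-minimal-$';       // whatever owned $ before our jQuery loaded
globalScope.jQuery = 'minimal-jQuery';    // Liferay's minimal jQuery for Bootstrap

(function (scope) {
  var previous$ = scope.$;
  var previousJQuery = scope.jQuery;

  var ourJQuery = {
    version: 'full-jquery',
    noConflict: function (deep) {
      scope.$ = previous$;                // always give $ back
      if (deep) {
        scope.jQuery = previousJQuery;    // deep mode gives jQuery back too
      }
      return ourJQuery;                   // caller keeps a private handle
    }
  };

  scope.$ = scope.jQuery = ourJQuery;     // our full jQuery takes the globals on load
})(globalScope);

// The theme's script does exactly this:
var dnjq = globalScope.jQuery.noConflict(true);

console.log(globalScope.$);       // 'AUI-or-minimal-$'   (restored)
console.log(globalScope.jQuery);  // 'minimal-jQuery'     (restored, deep mode)
console.log(dnjq.version);        // 'full-jquery'        (only reachable via dnjq)
```

The point: after noConflict(true), nothing on the page that relies on the global $ or jQuery is disturbed, and the full jQuery is reachable only through the private dnjq handle.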

In my src/templates/portal_normal.ftl file, I changed the <head /> section to be:

<head>
	<title>${the_title} - ${company_name}</title>

	<meta content="initial-scale=1.0, width=device-width" name="viewport" />

	<@liferay_util["include"] page=top_head_include />

	<script src=""></script>
	<script type="text/javascript">
		// handle the noconflict designation, use namespace dnjq for DN's jQ.
		dnjq = jQuery.noConflict(true);
	</script>
</head>

Okay, so since I'm doing this just before the closing </head> tag, I should be loading jQuery after Liferay/AUI has loaded its version. I'm also using no conflict mode, and finally I'm following the best practice and using my own namespace, dnjq.

I can use the gulp deploy command to now build my theme and deploy it to my locally running Liferay 7 instance (because I did the full configuration during project setup). In the console tailing Tomcat's catalina.out file, I can see that my theme is successfully deployed, processed and made available.

I can now create a new page and assign my theme to it. Now anyone who has done this much before, you already know that the page rendered using this simple theme is, well, pretty ugly. I mean, it's missing a lot of the normal sort of chrome I'd expect to see in a base theme including some initial positioning, margins, etc. I know, I know, Styled is meant for the experts like my friends Travis and Alex and any kind of initial defaults for those would just get in their way. For me, though, I'd be much better served if there were some kind of "Styled++" base theme that was somewhere between Styled and Classic (aka Clay Atlas), and honestly closer to the Classic side of the table. But we're not here to talk about that, so let's keep going.

So the page renders its ugly self but it looks like the same ugly self I've seen before, so nothing is broken. I can view source on the page and see that my changes to portal_normal.ftl were included, so that's good. I can even see that my namespace variable is there, so that's good too. So far this seems like a success.

Adding jQuery Mask

So my next step is to include the jQuery Mask Plugin. This is actually pretty easy to do: I just add the following line after my <script /> tag that pulls in jquery-latest.js:

<script src=""></script>

I pulled the URL straight from Igor's site because his demo is working, so I should have no problems.

I use gulp deploy to rebuild the theme and send it to the bundle, the console shows it successfully deploys and my page with my custom theme still renders fine when I refresh the page.

I did see an error in the console:

Mismatched anonymous define() module: function(a){var l=function(b,e,f){...

But it is reportedly coming from everything.jsp (which I didn't touch). So I'm worried about the warning, yes, but still am feeling pretty good about my progress, so on we go.

Testing the Mask

To test, I just created a simple web content. I had to use the "code" mode to get to the HTML fragment view, then I used the following:

<div class="input-group"><label for="date">Date</label>&nbsp;<input class="dn-date" type="text" /></div>

<script type="text/javascript">
	dnjq(document).ready(function() {
		dnjq('.dn-date').mask('00/00/0000');
	});
</script>

Nothing fancy here, just a test to verify that my theme was going to deliver the goods.

I save the web content then add it to my page and, well, fail.

In the console I can see:

Uncaught TypeError: dnjq(...).mask is not a function
    at HTMLDocument. (alt:453)
    at fire (jquery-latest.js:3119)
    at Object.fireWith [as resolveWith] (jquery-latest.js:3231)
    at Function.ready (jquery-latest.js:3443)
    at HTMLDocument.completed (jquery-latest.js:3474)

Nuts. I know this should work, it is working on Igor's page and I haven't really changed anything. So of course mask() is a function.

Diving Into The Source

So I have to solve this problem, and I approach it the same way any developer would: I go and check out Igor's code. Fortunately, he has shared the project on GitHub.

I don't have to go very far into the code before I realize what my problem is. Here's the relevant fragment; I'll give you a second to look at it and guess where the problem is:

// UMD (Universal Module Definition) patterns for JavaScript modules that work everywhere.
(function (factory, jQuery, Zepto) {
    if (typeof define === 'function' && define.amd) {
        define(['jquery'], factory);
    } else if (typeof exports === 'object') {
        module.exports = factory(require('jquery'));
    } else {
        factory(jQuery || Zepto);
    }
}(function ($) {...

I see this and I'm thinking that my problem lies with the AMD loader, or at least my lack of understanding how to get it to correctly deal with my script import. It has stepped in and smacked me around, leaving me standing there holding nothing but a bunch of broken javascript.
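You can actually reproduce the failure outside the browser. The sketch below pairs the UMD wrapper above with a fake AMD define (a stand-in for the loader Liferay puts on the page, not the real thing): because define.amd exists, the factory is handed to the loader and never runs against jQuery, so $.fn.mask never gets created.

```javascript
// Stand-ins: a fake jQuery and a fake AMD loader like the one on the page.
var fake$ = { fn: {} };
var amdRegistry = [];

function define(deps, factory) {
  amdRegistry.push({ deps: deps, factory: factory });  // loader "captures" the module
}
define.amd = {};  // this flag is what the UMD check looks for

// The same UMD pattern as jquery.mask.js (simplified):
(function (factory, jQuery) {
  if (typeof define === 'function' && define.amd) {
    define(['jquery'], factory);          // <-- taken: handed to the AMD loader
  } else {
    factory(jQuery);                      // never reached while define.amd exists
  }
}(function ($) {
  $.fn.mask = function () { return this; };
}, fake$));

console.log(typeof fake$.fn.mask);   // 'undefined' -- plugin never attached
console.log(amdRegistry.length);     // 1 -- it went to the loader instead
```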

AMD Bypass

So "ha ha", I think, because I know how to bypass the AMD loader...

I download the jquery.mask.js file and save it in my theme as src/js/jquery.mask.js. I then change the stanza above to simplify it as:

// UMD (Universal Module Definition) patterns for JavaScript modules that work everywhere.
(function (factory, jQuery, Zepto) {
    factory(jQuery || Zepto);
}(function ($) {...

Basically I just strip out everything that might be going to the AMD loader and just get the browser and jQuery to load the plugin.
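The same kind of simulation shows why the stripped-down wrapper works: with the AMD branch gone, the factory runs immediately against whatever jQuery reference is in scope. Here a bare stand-in object plays the role of the theme's namespaced jQuery, and the toy mask body is just there to prove the plugin attached:

```javascript
// The fake AMD loader is still on the page...
var amdCalls = 0;
function define() { amdCalls++; }
define.amd = {};

// ...but the stripped-down wrapper ignores it entirely.
var fake$ = { fn: {} };  // stand-in for the theme's namespaced jQuery (dnjq)

(function (factory, jQuery, Zepto) {
  factory(jQuery || Zepto);   // no define.amd check: go straight to jQuery
}(function ($) {
  $.fn.mask = function (pattern) {
    this.pattern = pattern;   // toy behavior, just to prove the plugin attached
    return this;
  };
}, fake$));

console.log(typeof fake$.fn.mask);  // 'function' -- plugin attached to jQuery
console.log(amdCalls);              // 0 -- the loader was never involved
```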

Retesting the Mask

I change the portal_normal.ftl line for the mask plugin to be:

<script src="${javascript_folder}/jquery.mask.js"></script>

It will now pull from my theme rather than the web and will use my "fixed" version.

So I gulp deploy my theme, it builds, deploys and starts without any problems in the console, so far so good.

I refresh my browser page (I'm using incognito mode, so no cache to worry about). No errors in the console, still looking good.

I enter some test numbers in the displayed input field, and it all seems to work: the mask is applied as I type.

Wahoo! Success!


Well, kind of.

I mean, like I said, this is probably not the right way to do all of this. I'm sure my friends Travis, Alex and Chema will point out my mistakes, well after they're done laughing at me.

Until then, I can at least consider this issue kind of closed...

David H Nebinger 2018-09-14T19:37:00Z
Categories: CMS, ECM

Theming in Liferay 7.1

Liferay - Fri, 09/14/2018 - 10:47

In this article I'll try to give you a comprehensive picture of the current state of theming in Liferay 7.1. To do this, I'll describe the evolution of Liferay theming from the early Bootstrap use in 6.x themes to the introduction of Lexicon in 7.0, and identify what has changed in 7.1. I'll also add some practical cases that could help you when building themes and themelets, depending on your choices.

The (ab)use of Bootstrap

Before getting into what Lexicon is, we need to talk about Bootstrap. It has been used in Liferay since 6.2 (and is still used, in a way, as we'll see later), and as web developers we have all certainly used it at one time or another.

Bootstrap is a CSS framework: it gives you a set of CSS classes ready to use in order to build user interfaces.

Bootstrap is not a design language: it doesn’t provide a proper design system to improve your user experience through consistent interfaces.

And I'm sure that you have already experienced the problem that comes with this, because the unfortunate consequence can look like this:

Of course, this list is not exhaustive. Sadly, you can imagine a lot more combinations (e.g. with positioning).

It's a common case to use Bootstrap as both a CSS framework and a design language. Bootstrap's design system is kind of implicit, because its current rules are imprecise. For example:

“Bootstrap includes several predefined button styles, each serving its own semantic purpose, with a few extras thrown in for more control.”

Source: Bootstrap documentation

What does this really mean? In my previous use case of Save/Cancel, is it ok to use .btn-danger when it’s dangerous to cancel and .btn-default when it’s safe within the same application? As it is, we could say that all of our previous interfaces are following Bootstrap rules. But the user experience can become a nightmare.

So in order to create the best UX/UI design for your application, you need a proper design system you can rely on.

Understanding Lexicon, Liferay's own design system

The new design of Liferay 7.0 is not just a new classic theme with a Bootstrap upgrade. Design in Liferay is more important than ever. And based on what we mentioned previously, Liferay needed a proper design system. Consequently, Lexicon has been created.

Lexicon is not a CSS framework: “it is just a set of patterns, rules and behaviors.”

Lexicon is a design language: “a common framework for building interfaces within the Liferay product ecosystem.”

Source: Lexicon documentation

It's the opposite of the definition we gave for Bootstrap. But unlike Bootstrap, Lexicon can't mislead you into using it as a CSS framework, since it doesn't provide any ready-to-use code.

To continue with our button example, here’s a sample of what you can find about it with Lexicon:

Source: Lexicon documentation

Lexicon helps designers

If Lexicon is not like Bootstrap, we can compare it to Google Material Design, Microsoft Metro or iOS Design. And just like them, Lexicon is for designers. In order to create the look and feel of an application, designers need two important pieces of documentation: the graphic charter (which designers may have built themselves, but not necessarily) and the design language system.

Taking our previous example with buttons, the graphic charter would define the color and shape of our primary button in order to reflect the graphical identity of the company and/or the product, whereas the design language would define in which cases to use primary buttons and how to use them in order to ensure consistent integration within its ecosystem.

Let’s consider this statement:

Following the Material Design system to create the look and feel of an Android application guarantees that your application will integrate properly within the Android ecosystem.

Now the same philosophy applies for Liferay:

Following the Lexicon design system to create the look and feel of a Liferay site guarantees that your site will integrate properly within the Liferay ecosystem.

More components

The set of components (or pattern library) is defined by the design language system; this is yet another area where Bootstrap's role is misleading. Now, with Lexicon, the number of components can be expanded as needed. Most importantly, the list of components is not bound to what Bootstrap provides, and thus not tied to Bootstrap upgrades.

A concrete example is the timeline components available in Liferay 7.1 but missing in Bootstrap 4.

Implementing Lexicon

Lexicon CSS becomes Clay

In 7.0, Liferay implemented Lexicon to apply its guidelines to a new look and feel and called it Lexicon CSS. But I guess the name added too much confusion. So now in Liferay 7.1, Lexicon’s implementation is Clay.

Where is Bootstrap?

Bootstrap is not gone with the arrival of Lexicon and Clay. Liferay 7.0 used Bootstrap 3 and now Liferay 7.1 uses Bootstrap 4, but to what extent? Where is Bootstrap?

Well, Bootstrap is used as a base framework in Clay, so Clay is an extension of Bootstrap. In other words, Clay provides existing CSS classes from Bootstrap as well as new CSS classes, all of them built to be Lexicon compliant.

For example, if you want to define an HTML element as a fluid container you can use .container-fluid (Bootstrap). But if you want a fluid container that doesn't expand beyond a set width, you can also use the .container-fluid-max-* classes (Clay).

What about icons?

Lexicon provides a set of icons. Clay implements it as a set of SVG icons. But you are free to use another icon library of your choice such as Font Awesome or Glyphicons.

What is Clay Atlas?

Clay Atlas is the default base theme in Liferay.

Because Lexicon is a design language, you could imagine building an equivalent of Clay, either from scratch or on top of your favorite CSS framework (e.g. Bootstrap, Foundation, Bulma), to integrate into a product other than Liferay.

For example, we could imagine an implementation called Gneiss with two themes named Alps and Himalayas:



With this diagram, we highlight the role of each implementation and the possibilities that come with it:

  • Multiple Lexicon implementations

  • Multiple themes for Lexicon implementations

  • Multiple custom themes extending from a parent theme

In Liferay, you can either build a theme independent of Atlas, or based on Atlas.

Theme implementation concerns

This part will not cover all the steps required to create a theme, because the Liferay documentation covers that properly. We will focus on the build process and some parts that need your attention.

Clay vs Bootstrap

In Liferay 6.2, Liferay used Bootstrap components, and so did you. So when it came to customization, you wanted to customize Bootstrap.

But as we saw in this article, Clay is an extension of Bootstrap, and Liferay is now using Clay components instead of Bootstrap components. So now, you want to use Clay components too and thus, you want to customize Clay instead of Bootstrap.

For example, alerts for error messages with Clay:

<div class="alert alert-danger" role="alert">
	<span class="alert-indicator">
		<svg aria-hidden="true" class="lexicon-icon lexicon-icon-exclamation-full">
			<use xlink:href="${images_folder}/clay/icons.svg#exclamation-full"></use>
		</svg>
	</span>
	<strong class="lead">Error:</strong> This is an error message
</div>

Instead of alerts with Bootstrap:

<div class="alert alert-danger" role="alert">
	Error: This is an error message
</div>

Check out available components on Clay’s site.

Customizing Clay

Even if, for some reason, you still want to use Bootstrap components, you need to customize Clay components, because Liferay is using them and consequently your users will experience them. If you only customize Bootstrap components, Clay components will receive only part of the customization and the user experience will be inconsistent (e.g. success alerts after a save/submission).

Customizing Clay is the same process as for Bootstrap: you work with SCSS files in order to override variables. You can find these variables in the official GitHub repository.

Do you remember when you could customize Bootstrap online and download the result?


For each variable in Bootstrap, you had a corresponding input. So customization was quick and handy.

Guess what? Now there’s an awesome tool like that for Clay called Clay Paver (for Liferay 7.0, but soon for 7.1).



IMHO, it’s actually better because you can preview the result online while you’re editing.

It’s open source on GitHub and you can run it locally. So if you like it, please star it to support its author Patrick Yeo for providing such a great tool to the community.

Integrating a Bootstrap 4 theme

In this case, you want to use an existing Bootstrap 4 theme and build a Liferay theme from it. You can find examples here where I built Liferay 7.1 themes from Bootstrap 4 themes using Liferay documentation.

However, each Bootstrap theme can be built differently, so you may run into problems when you want to integrate it into a Liferay theme. So we're going to take a closer look at some of the potential pitfalls.

Use SCSS files

You need to use the uncompiled version of the Bootstrap theme, i.e. multiple SCSS files.

These SCSS files are included in a subfolder of ${theme_root_folder}/src/css, as mentioned in the documentation.

Don’t use a compiled CSS file (e.g. my-bootstrap-theme.css or my-bootstrap-theme.min.css). If you’re working from a Bootstrap theme that doesn’t include SCSS files, you need to ask for them because these files should be provided.

Verify variables

In some cases, a Bootstrap theme uses custom variables. For example, instead of $primary it could be something like $custom-color-primary, and thus $primary is not overridden when integrating your Bootstrap theme into your Liferay theme.

The least painful way to resolve this is to map the custom variables to the Bootstrap variables in a dedicated file (e.g. _map_variables.scss):

$primary: $custom-color-primary;

This dedicated file is then imported into _clay_variables.scss.

Choose your icon provider

In your Bootstrap 4 theme, there's a good chance that it doesn't use Font Awesome 3, which is the version included in Liferay. In that case, you can:

  • Add Font Awesome (4+) in your theme

  • Downgrade Font Awesome icons (not recommended: it can be problematic for theme consistency, and some icons may not exist in Font Awesome 3)

  • Migrate to Lexicon icons


The new UI experience in Liferay 7 is not a simple “modern, refreshed look”. As developers, we might have felt like it was, considering that Lexicon and Clay were new things introduced by Liferay that didn't seem to require our attention because Bootstrap was still there. But there's so much work and thinking behind website design with Lexicon and Clay that understanding them becomes the key to building and extending Liferay themes. And I hope this article helped you with that.


Resources

Documentation



Liferay 7.0 on Themes

Liferay 7.1 on Themes


Introducing Lexicon: The Liferay Experience System

New Improvements in Lexicon: Features for Admin and Sites


Lexicon update in Liferay 7.1 from 7.0

Designing animations for a multicultural product

How Good Design Enhances Utility


Clay Paver

Liferay on Dribbble

Liferay Design Site

Louis-Guillaume Durand 2018-09-14T15:47:00Z
Categories: CMS, ECM

Blade Extensions and Profiles

Liferay - Thu, 09/13/2018 - 06:53

Hello all!


We want our development tools to be flexible and extensible enough to meet our users' requirements, and with this in mind, we have developed two new concepts for Blade: Extensions and Profiles.

Extensions allow you to develop your own custom commands for Blade: Blade acts as a framework and invokes your command when it is specified on the CLI. You, as the developer of the custom command, may choose the command name and help text, and implement it to meet your requirements. We have also included the ability to install and manage custom extensions directly from the Blade CLI, in the form of "blade extension install" and "blade extension uninstall".

Building upon Extensions, we have also created Profiles. A profile is basically a metadata flag associated with a given Liferay Workspace. When a custom command is created for Blade, it may be associated with a specific profile type (by using the @BladeProfile annotation), and after this custom command is installed for a user, it will be available in any Liferay Workspace associated with that profile.

To create a workspace associated with a particular profile, Blade may be invoked as "blade init -b foo", where foo is the profile. (We may change this flag to -p / --profile, or add support for both; what do you think?)

If you would like to get started with these features, please look here for implementation examples (more to come).

Please let us know what you think of these features, if there is anything you would like to see added or changed in their implementation, or if you have comments about Blade in general. Your feedback helps us refine and improve the development experience, and it is very much appreciated!

Thank you,

Chris Boyd

Christopher Boyd 2018-09-13T11:53:00Z
Categories: CMS, ECM

Notice: Repository CDN URL Change

Liferay - Wed, 09/12/2018 - 07:53

Just a quick blog in case it hasn't come up before...

Liferay was using a CDN for offloading traffic for the repository artifacts. You've likely seen the URL during builds or within your build scripts/configuration.

The old one is of the form:

Recently, though, Liferay switched to a new CDN provider and is using a newer URL. You might have seen this if you have upgraded your Blade or have started working with 7.1.

The new one is of the form:

If you're using the old one, I would urge you to change to the new version. I haven't heard if or when the old one will be retired, but you don't want to find out when your build server starts kicking out error emails at two in the morning.

  • If you are using Ant/Ivy, check the ivy-settings.xml for this URL and change it there.
  • If you are using Maven, check your poms and your master settings.xml in your home directory's hidden .m2 folder.
  • If you are using Gradle, check your settings.gradle file, your build.gradle files and possibly your file.

While the transition was in process, I and others recommended a number of times just taking the CDN portion out of the URL and going straight to the repository host itself; that is the non-CDN version. If you did that, you should change your URLs to the new CDN as well. Liferay is planning at some point to block public connections to the repository (so many users hitting the repository directly slows their internal build processes), although I have no idea when or if this will happen.

But again, you don't want to find out it happened when your build server starts failing at 2am...

David H Nebinger 2018-09-12T12:53:00Z
Categories: CMS, ECM

Devcon 2018

Liferay - Wed, 09/12/2018 - 03:25

Did you already book your ticket for Devcon 2018? Early Bird ends in a few hours (14 Sep 2018) and I hear that the Unconference is solidly booked (not yet sold out, but on a good path to be sold out very soon).

Whether or not you have been to a past Devcon, if you need more reasons to come: the agenda is now online, together with a lot of information and quotes from past attendees on the main website. You'll have a chance to meet quite a number of Liferay enthusiasts, consultants, developers and advocates from Liferay's community. Rumor has it (substantiated by the agenda) that David Nebinger will share his experience on Upgrading to Liferay 7.1, and is able to do so in 30 minutes. And if you've ever read his blogs or forum posts, you know that he covers a lot of ground and loves to share his knowledge and experience. And I see numerous other topics, from the Developer Story to React, from OSGi to Commerce and AI.

Any member of Liferay's team that you see at Devcon is there for one reason: To talk to you. If you've ever wondered about a question, wanted to lobby for putting something on the roadmap, or just need a reason for a certain design decision: That's the place where you can find someone responsible, on any level of Liferay's architecture or product guidance. Just look at the names on the agenda, and expect a lot more of Liferay staff to be on site in addition to those named. And, of course, a number of curious and helpful community members as well.

And if you still need yet another reason: Liferay Devcon is consistently the conference with, by far, the best coffee you can get at a conference. The bar is high, and we're aiming to surpass it again with the help of Thomas Schweiger.

(If you're more interested in business than this nerd stuff, we have events like LDSF in other places in Europe. If Europe is too far, consider NAS or another event close to you. But if this nerdy stuff is for you, you should really consider coming.)

(Did I mention that the Unconference will sell out? Don't come crying to me if you don't get a ticket because you were too late. You have been warned.)


(Images: article's title photo: CC-by-2.0 Jeremy Keith,  AMS silhouette from Devcon site)

Olaf Kock 2018-09-12T08:25:00Z
Categories: CMS, ECM

Building charts for multiple products

Liferay - Tue, 09/11/2018 - 03:28

With Liferay releasing new products such as Analytics Cloud and Commerce we decided to cover the need for charts by providing an open source library.


The technology

Clay, our main implementation of Lexicon, created Clay charts. These charts are built on top of Billboard.js, to which Julien Castelain and other Liferay developers have contributed extensively. Billboard.js is an adaptation layer on top of D3.js, probably the best-known and most widely used data visualization library these days.


The issue

Although Billboard.js is a very good framework, it did not cover all our needs in terms of interaction design and visual design. Therefore, we have been working on top of it, contributing some of that work upstream and keeping the rest inside Lexicon and Clay.

Improving accessibility

Improving the accessibility of the different charts was one of our first contributions to Billboard.js. We provided three different properties that help to differentiate the data without relying on colors alone.


  • 9 different dashed stroke styles for line charts, which help to follow the shape of each line.

  • 9 different shapes to use as dots in line charts and in the legend, which help to read the points on each line.

  • 9 different background patterns to be used on shaped charts like the doughnut chart or the bar chart; adding this property to a chart background helps to recognize the different shapes even if the colors are similar.

Here is a clear example of how the user would perceive, read and follow the different data in the line chart without the direct use of colors.

Creating a color palette

Color is one of the first properties that users perceive, along with shapes and lines, making it our next priority. We needed a flexible color palette that allowed us to represent different types of data.

This set is composed by 9 different and ordered colors that are meant to be used in shaped charts as background or in line charts as borders.

Each of these colors can be divided into 9 different shades using a Sass generator. It is useful to generate a gradient chart to cover all the standard situations for charts. 
Here’s an example using the color blue:

Ideally, to take advantage of these colors, use the charts over a white background.

Warning: using these colors for text won't reach the minimum contrast ratio required by the W3C. Using a legend, tooltips and popovers to provide text information is the best course of action.

Introducing a base interaction layer

The idea behind the design of these interactions is to provide a consistent and simple base for all charts. This increases predictability and reduces the learning curve.

These interactions are based on the main events (click/hover/tap) applied to the common elements in our charts: axis, legend, isolated elements or grouped elements.
We also reinforce the visualization with highlights between related pieces of information, and with extended information displayed through popovers.

As you can see in the example below, the stacked bar chart and the heatmap share the same mouse-hover interaction to highlight the selected data. This is done without any change to the main color of the focused element; instead, the opacity of the other elements is decreased.

In addition to this, each designer can extend these interactions depending on their product, as well as work on advanced interactions. So if they need specific actions, such as a different result on hover, data filters or data settings, they can add them to the chart as a customization.


Working with D3.js allowed us to focus on our project's details, such as accessibility, colors and interaction, adding quality to the first version of Charts while meeting the deadline.

Thanks to the collaboration with Billboard.js we were able to help another open source project and as a result, share our work with the world.

You can find more information about Clay, Charts and other components directly inside the  Lexicon Site.

Emiliano Cicero 2018-09-11T08:28:00Z
Categories: CMS, ECM

How Page Fragments help the new web experience in Liferay Portal 7.1

Liferay - Mon, 09/10/2018 - 03:50

This is the second post explaining the new Web Experience functionalities released in version 7.1 of Liferay Portal.  As presented in the previous post, in order to empower business users, it is necessary to have Page Fragment collections available.


But, what are they and what is their use?


Page Fragments are “reusable page parts” created by web developers to be used by non-technical users. Page Fragments are designed and conceived to be assembled by marketers or business users to build Content Pages. To start creating Page Fragments, we will be required to add a Page Fragment collection (we will look at the tools available in a moment), but first...


How are Page Fragments developed?


Page Fragments are implemented by web developers using HTML, CSS and JavaScript. The markup of a Page Fragment is regular HTML, but it can contain some special tags. Effectively, two main types of lfr tags add functionality to Page Fragments:


  • "lfr-editable" is used to make a fragment section editable. This can apply to plain "text", "image" or "rich-text", depending on which of the 3 "type" options is used. Rich text provides a WYSIWYG editor for editing before publication.


<lfr-editable id="unique-id" type="text">
	This is editable text!
</lfr-editable>

<lfr-editable id="unique-id" type="image">
	<img src="...">
</lfr-editable>

<lfr-editable id="unique-id" type="rich-text">
	<h1>This is editable rich text!</h1>
	<p>It may contain almost any HTML elements</p>
</lfr-editable>


  • “lfr-widget-<>” is a group of tags used to embed widgets within Page Fragments and, therefore, to add dynamic behaviour. The corresponding widget name replaces the <>: for instance, “nav” for the Navigation Menu widget, “web-content” for Web Content Display, or “form” for the Forms widget.


```html
<div class="nav-widget">
	<lfr-widget-nav></lfr-widget-nav>
</div>
```


If you would like to get more detail on how to build Page Fragments, you can check the Liferay 7.1 Documentation section on Developing Page Fragments.
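Putting both tag types together, a small hypothetical fragment (the ids and class names below are invented for illustration) could combine an editable heading, an editable image and an embedded widget:

```html
<div class="banner">
	<lfr-editable id="banner-title" type="text">Banner title</lfr-editable>

	<lfr-editable id="banner-image" type="image">
		<img src="...">
	</lfr-editable>

	<lfr-widget-nav></lfr-widget-nav>
</div>
```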


What tools are there for the creation and management of Page Fragments?


Page Fragment collections are stored and organized from a new administration application. There we will find access to the Page Fragment editor, where front-end developers will be able to create new collections or edit existing ones. Importing/exporting collections is also an option.



An interesting additional functionality is the “View Usages” option found in the kebab menu of Page Fragments that are in use. It allows you to track the usage of a given Page Fragment: in which Pages, Page Templates and Display Pages it has been used, and which version each of them is using. Page Fragments are designed to let business users do inline editing, which means web developers are not expected to make changes to the actual content; however, some other edits are likely to be necessary (adjusting the size of an image, the position of text, and so on). To support these scenarios, changes are not propagated to Page Fragments in use by default, but the web developer is given a tool for bulk change propagation that applies to selected usages within the list.


Finally, a reminder that, should you be interested in starting from an example, Themes with an existing set of Page Fragments can be downloaded for free from the Marketplace. Both Fjord and Westeros Bank have now been updated for 7.1.


Also, remember that you can check the “Building Engaging Websites” free course available on Liferay University.

Ianire Cobeaga 2018-09-10T08:50:00Z
Categories: CMS, ECM

Extending OSGi Components

Liferay - Fri, 09/07/2018 - 20:46

A few months ago, in the Community Chat, one of our community members raised the question, "Why does Liferay prefer public pages over private pages?" For example, if you select the "Go to Site" option, if there are both private pages and public pages, Liferay sends you to the public pages.

Unfortunately, I don't have an answer to that question. However, through some experimentation, I am able to help answer a closely related question: "Is it possible to get Liferay to prefer private pages over public pages?"

Find an Extension Point

Before you can actually do any customization, you need to find out what is responsible for the behavior that you are seeing. Once you find that, it'll then become apparent what extension points are available to bring about the change you want to see in the product.

So to start off, you need to determine what is responsible for generating the URL for the "Go to Site" option.

Usually, the first thing to do is to run a search against your favorite search engine to see if anyone has explained how it works, or at least tried a similar customization before. If you're extremely fortunate, you'll find a proof of concept, or at least a vague outline of what you need to do, which will cut down on the amount of time it takes to implement your customization. If you're very fortunate, you'll find someone talking about the overall design of the feature, which will give you some hints on where you should look.

Sadly, in most cases, your search will probably come up empty. That's what would have happened in this case.

If you have no idea what to do, the next thing you should try is to ask someone if they have any ideas on how to implement your customization. For example, you might post on the Liferay community forums or you might try asking a question on Stackoverflow. If you have a Liferay expert nearby, you can ask them to see if they have any ideas on where to look.

If there are no Liferay experts, or if you believe yourself to be one, the next step is to go ahead and find it yourself. To do so, you will need a copy of the Liferay Portal source (shallow clones are also adequate for this purpose, which is beneficial because a full clone of Liferay's repository is on the order of 10 GB now), and then search for what's responsible for the behavior that you are seeing within that source code.

Find the Language Key

Since what we're changing has a GUI element with text, it necessarily has a language key responsible for displaying that text. This means that if you search through all the files in Liferay, you should be able to find something that reads, "Go to Site". If you're using Liferay in a different language, you'll need to search through that language's translation files instead.

```shell
git ls-files | grep -F Language.properties | xargs grep -F "Go to Site"
```

In this case, searching for "Go to Site" will lead you to the module's language files, which tell us that the language key that reads "Go to Site" corresponds to the key go-to-site.
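The entry itself is an ordinary key/value pair in the module's language file, as sketched here from the text above:

```properties
go-to-site=Go to Site
```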

Find the Frontend Code

It's usually safe to assume that the module whose language file declares the key is also the module that uses it, which means we can restrict our search to just the module where the key lives.

```shell
git ls-files modules/apps/web-experience/product-navigation/product-navigation-site-administration | \
	xargs grep -Fl go-to-site
```

This will give us exactly one result.

Replace the Original JSP

If you were to choose to modify this JSP, a natural approach would be to follow the tutorial on JSP Overrides Using OSGi Fragments, and then call it a day.

With that in mind, a simple way to get the behavior we want is to let Liferay generate the URL, and then do a straight string replacement changing /web to /group (or /user if it's a user personal site) if we know that the site has private pages.

```jsp
<%@page import="com.liferay.portal.kernel.util.StringUtil" %>
<%@page import="com.liferay.portal.util.PropsValues" %>

<%
Group goToSiteGroup = siteAdministrationPanelCategoryDisplayContext.getGroup();

String goToSiteURL = siteAdministrationPanelCategoryDisplayContext.getGroupURL();

if (goToSiteGroup.getPrivateLayoutsPageCount() > 0) {
	goToSiteURL = StringUtil.replaceFirst(
		goToSiteURL,
		PropsValues.LAYOUT_FRIENDLY_URL_PUBLIC_SERVLET_MAPPING,
		goToSiteGroup.isUser() ?
			PropsValues.LAYOUT_FRIENDLY_URL_PRIVATE_USER_SERVLET_MAPPING :
			PropsValues.LAYOUT_FRIENDLY_URL_PRIVATE_GROUP_SERVLET_MAPPING);
}
%>

<aui:a cssClass="goto-link list-group-heading" href="<%= goToSiteURL %>" label="go-to-site" />
```

Copy More Original Code

Now, let's imagine that we also want to worry about the go-to-other-site site selector and update it to provide the URLs we want. Investigating the site selector would lead you to item selectors, which would lead you to MySitesItemSelectorView and RecentSitesItemSelectorView, which would take you to view_sites.jsp.

We can see that there are three instances where it generates the URL by calling GroupURLProvider.getGroupURL directly: line 83, line 111, and line 207. We would simply follow the same pattern as before in each of these three instances, and we'd be finished with our customization.

If additional JSP changes would be needed, this process of adding JSP overrides and replacement bundles would continue.

Extend the Original Component

While we were lucky in this case and found that we could fix everything by modifying just two JSPs, we won't always be this lucky. Therefore, let's take the opportunity to see whether there's a different way to solve the problem.

Following the path down to all the different things that this JSP calls leads us to a few options for which extension point we can potentially override in order to get the behavior we want.

First, we'll want to ask: is the package the class lives in exported? If it is not, we'll need to either rebuild the manifest of the original module to provide the package in Export-Package, or we will need to add our classes directly to an updated version of the original module. The latter is far less complicated, but the module would live in osgi/marketplace/override, which is not monitored for changes by default (see the portal property).

From there, you'll want to ask the question: what kind of Java class is it? In particular, you would ask dependency management questions. Is an instance of it managed via Spring? Is it retrieved via a static method? Is there a factory we can replace? Is it directly instantiated each time it's needed?

Once you know how it's instantiated, the next question is how you can change its value where it's used. If there's a dependency management framework involved, we make the framework aware of our new class. For a direct instantiation, then depending on the Liferay team that maintains the component, you might see these types of variables injected as request attributes (which you would handle by injecting your own class for the request attribute), or you might see this instantiated directly in the JSP.

Extend a Component in a Public Package

Let's start with SiteAdministrationPanelCategoryDisplayContext. Digging around in the JSP, you discover that, unfortunately, it's just a regular Java object whose constructor is called in site_administration_body.jsp. Since it's a plain old Java object that we instantiate from the JSP (not a managed dependency), it's a bad choice for an extension point unless you want to replace the class definition.

What about GroupURLProvider? Well, it turns out that GroupURLProvider is an OSGi component, which means its lifecycle is managed by OSGi. This means that we need to make OSGi aware of our customization, and then replace the existing component with our component, which will provide a different implementation of the getGroupURL method which prefers private URLs over public URLs.

From a "can I extend this in a different bundle, or will I need to replace the existing bundle" perspective, we're fortunate (the class is inside of a -api module, where everything is exported), and we can simply extend and override the class from a different bundle. The steps are otherwise identical, but it's nice knowing that you're modifying a known extension point.

Next, we declare our extension as an OSGi component.

```java
@Component(
	immediate = true,
	service = GroupURLProvider.class
)
public class PreferPrivatePagesGroupURLProvider extends GroupURLProvider {
}
```

If you're the type of person who likes to sanity check after each small change by deploying your update, there's a wrinkle you will run into right here.

If you deploy this component and then blacklist the original component by following the instructions on Blacklisting OSGi Modules and Components, you'll run into a NullPointerException. This is because OSGi doesn't fulfill any of the references on the parent class, so when it calls methods on the original GroupURLProvider, none of the references that the code thought would be satisfied actually are satisfied, and it just calls methods on null objects.

You can address part of the problem by using bnd to analyze the parent class for protected methods and protected fields by adding -dsannotations-options: inherit.

-dsannotations-options: inherit

Of course, setting field values using methods is uncommon, and the things Liferay likes to attach @Reference to are generally private variables, and so almost everything will still be null even after this is added. In order to work around that limitation, you'll need to use Java reflection. In a bit of an ironic twist, the convenience utility for replacing the private fields via reflection will also be a private method.
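As a quick standalone illustration of that reflection workaround (the class and field names below are invented for illustration, and plain java.lang.reflect is used in place of Liferay's ReflectionUtil):

```java
import java.lang.reflect.Field;

// Hypothetical stand-in classes (not Liferay code) showing the trick:
// a subclass setting a private field declared on its parent class.
class ParentWithPrivateField {

	public String greet() {
		return _greeting;
	}

	private String _greeting = "unset";
}

class ChildSettingParentField extends ParentWithPrivateField {

	// Mirrors the _setSuperClassField(...) helper: look up the field on
	// the parent class, make it accessible, and set it on this instance.
	protected void setSuperClassField(String name, Object value)
		throws Exception {

		Field field = ParentWithPrivateField.class.getDeclaredField(name);

		field.setAccessible(true);

		field.set(this, value);
	}
}
```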

```java
@Reference(unbind = "unsetHttp")
protected void setHttp(Http http) throws Exception {
	_setSuperClassField("_http", http);
}

@Reference(unbind = "unsetPortal")
protected void setPortal(Portal portal) throws Exception {
	_setSuperClassField("_portal", portal);
}

protected void unsetHttp(Http http) throws Exception {
	_setSuperClassField("_http", null);
}

protected void unsetPortal(Portal portal) throws Exception {
	_setSuperClassField("_portal", null);
}

private void _setSuperClassField(String name, Object value) throws Exception {
	Field field = ReflectionUtil.getDeclaredField(
		GroupURLProvider.class, name);

	field.set(this, value);
}
```

Implement the New Business Logic

Now that we've extended the logic, what we'll want to do is steal all of the logic from the original method (GroupURLProvider.getGroupURL), and then flip the order on public pages and private pages, so that private pages are checked first.

```java
@Override
protected String getGroupURL(
	Group group, PortletRequest portletRequest, boolean includeStagingGroup) {

	ThemeDisplay themeDisplay = (ThemeDisplay)portletRequest.getAttribute(
		WebKeys.THEME_DISPLAY);

	// Customization START

	// Usually Liferay passes false and then true. We'll change that to
	// instead pass true and then false, which will result in the Go to Site
	// preferring private pages over public pages whenever both are present.

	String groupDisplayURL = group.getDisplayURL(themeDisplay, true);

	if (Validator.isNotNull(groupDisplayURL)) {
		return _http.removeParameter(groupDisplayURL, "p_p_id");
	}

	groupDisplayURL = group.getDisplayURL(themeDisplay, false);

	if (Validator.isNotNull(groupDisplayURL)) {
		return _http.removeParameter(groupDisplayURL, "p_p_id");
	}

	// Customization END

	if (includeStagingGroup && group.hasStagingGroup()) {
		try {
			if (GroupPermissionUtil.contains(
					themeDisplay.getPermissionChecker(), group,
					ActionKeys.VIEW_STAGING)) {

				return getGroupURL(group.getStagingGroup(), portletRequest);
			}
		}
		catch (PortalException pe) {
			_log.error(
				"Unable to check permission on group " + group.getGroupId(),
				pe);
		}
	}

	return getGroupAdministrationURL(group, portletRequest);
}
```

Notice that in copying the original logic, we need a reference to _http. We can either replace that with HttpUtil, or we can store our own private copy of _http. So that the code looks as close to the original as possible, we'll store our own private copy of _http.

```java
@Reference(unbind = "unsetHttp")
protected void setHttp(Http http) throws Exception {
	_http = http;

	_setSuperClassField("_http", http);
}

protected void unsetHttp(Http http) throws Exception {
	_http = null;

	_setSuperClassField("_http", null);
}

private Http _http;
```

Manage a Component's Lifecycle

At this point, all we have to do is disable the old component, which we can do by following the instructions on Blacklisting OSGi Modules and Components.

However, what if we wanted to do that at the code level rather than at the configuration level? Maybe we want to simply deploy our bundle and have everything just work without requiring any manual setup.

Disable the Original Component on Activate

First, you need to know that the component exists. If you attempt to disable the component before it exists, that's not going to do anything for you. We know it exists once a @Reference is satisfied. However, because we're going to disable it immediately upon realizing it exists, we want to make the reference optional. This leads us to the following rough outline, where we call _deactivateExistingComponent once our reference is satisfied.

```java
@Reference(
	cardinality = ReferenceCardinality.OPTIONAL,
	policy = ReferencePolicy.DYNAMIC,
	policyOption = ReferencePolicyOption.GREEDY,
	target = "(",
	unbind = "unsetGroupURLProvider"
)
protected void setGroupURLProvider(GroupURLProvider groupURLProvider)
	throws Exception {

	_deactivateExistingComponent();
}

protected void unsetGroupURLProvider(GroupURLProvider groupURLProvider) {
}
```

Next, you need to be able to access a ServiceComponentRuntime, which provides a disableComponent method. We can get access to this with another @Reference. If we exported this package, we would probably want this to be set using a method for reasons we experienced earlier that required us to implement our _setSuperClassField method, but for now, we'll be content with leaving it as private.

```java
@Reference
private ServiceComponentRuntime _serviceComponentRuntime;
```

Finally, in order to call ServiceComponentRuntime.disableComponent, you need to generate a ComponentDescriptionDTO, which coincidentally needs just a name and the bundle that holds the component. In order to get the Bundle, you need to have the BundleContext.

```java
@Activate
public void activate(
		ComponentContext componentContext, BundleContext bundleContext,
		Map<String, Object> config)
	throws Exception {

	_bundleContext = bundleContext;

	_deactivateExistingComponent();
}

private void _deactivateExistingComponent() throws Exception {
	if (_bundleContext == null) {
		return;
	}

	String componentName = GroupURLProvider.class.getName();

	Collection<ServiceReference<GroupURLProvider>> serviceReferences =
		_bundleContext.getServiceReferences(
			GroupURLProvider.class,
			"(component.name=" + componentName + ")");

	for (ServiceReference serviceReference : serviceReferences) {
		Bundle bundle = serviceReference.getBundle();

		ComponentDescriptionDTO description =
			_serviceComponentRuntime.getComponentDescriptionDTO(
				bundle, componentName);

		_serviceComponentRuntime.disableComponent(description);
	}
}

private BundleContext _bundleContext;
```

Enable the Original Component on Deactivate

If we want to be a good OSGi citizen, we also want to make sure that the original component is still available whenever we stop or undeploy our module. This is really the same thing in reverse.

```java
@Deactivate
public void deactivate() throws Exception {
	_activateExistingComponent();

	_bundleContext = null;
}

private void _activateExistingComponent() throws Exception {
	if (_bundleContext == null) {
		return;
	}

	String componentName = GroupURLProvider.class.getName();

	Collection<ServiceReference<GroupURLProvider>> serviceReferences =
		_bundleContext.getServiceReferences(
			GroupURLProvider.class,
			"(component.name=" + componentName + ")");

	for (ServiceReference serviceReference : serviceReferences) {
		Bundle bundle = serviceReference.getBundle();

		ComponentDescriptionDTO description =
			_serviceComponentRuntime.getComponentDescriptionDTO(
				bundle, componentName);

		_serviceComponentRuntime.enableComponent(description);
	}
}
```

Once we deploy our change, we find that a few other components in Liferay are using the GroupURLProvider provided by OSGi. Among these is the go-to-other-site site selector, which would have required another bundle replacement with the previous approach.

Minhchau Dang 2018-09-08T01:46:00Z
Categories: CMS, ECM

Gradle: compile vs compileOnly vs compileInclude

Liferay - Fri, 09/07/2018 - 12:55

By request, a blog to explain compile vs compileOnly vs compileInclude...

First, it is important to understand that these are names of configurations used during the build process, specifically for dependency management. In Maven, the equivalent concept is implemented as scopes.

Each time one of these three configurations is listed in your dependencies {} section, you are adding the identified dependency to that configuration.

The three configurations all add a dependency to the compile phase of the build, when your Java code is being compiled into bytecode.

The real difference these configurations have is on the manifest of the jar when it is built.


So compile is the one that is easiest to understand. You are declaring a dependency that your Java code needs to compile cleanly.

As a best practice, you only want to list compile dependencies for libraries that you really need to compile your code. For example, you might need group: 'org.apache.poi', name: 'poi-ooxml', version: '4.0.0' for reading and writing Excel spreadsheets, but you wouldn't want to chase down and declare a compile dependency on everything POI itself needs. As transitive dependencies, Gradle will handle those for you.

When the compile occurs, this dependency will be included in the classpath for javac to compile your Java source files.

Additionally, when it comes time to build the jar, the packages that your Java code uses from POI will be added as Import-Package manifest entries.

It is this addition which will result in the "Unresolved Reference" error about the missing package if it is not available from some other module in the OSGi container.

For those with a Maven background, the compile configuration is the same as Maven's compile scope.


The compileOnly configuration is used to itemize a dependency that you need to compile your code, just as with compile above.

The difference is that packages your Java code uses from a compileOnly dependency will not be listed as Import-Package manifest entries.

The common example for using compileOnly revolves around annotations. I like to use FindBugs on my code (don't laugh, it has saved my bacon a few times, and I feel I deliver better code when I follow its suggestions). Sometimes, however, FindBugs reports a false positive: something it thinks is a bug, but I know is exactly how I need it to be.

So the normal solution here is to add the @SuppressFBWarnings annotation on the method; here's one I used recently:

```java
@SuppressFBWarnings(
	value = "NP_NULL_PARAM_DEREF",
	justification = "Code allocates always before call."
)
public void emit(K key) {
	...
}
```

FindBugs was complaining that I didn't check key for null, but it is actually emitting within the processing of a Map entry, so the key can never be null. Rather than add the null check, I added the annotation.

To use the annotation, I of course need to include the dependency:

compileOnly ''

I used compileOnly in this case because I only need the annotation for the compile itself; since it is not a runtime-retained annotation, it is invisible at runtime, so I do not need this dependency after the compile is done.

And I definitely don't want it showing up in the Import-Package manifest entry.
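A small standalone demo of why a compile-time annotation does not need to be on the runtime classpath (the annotation names here are invented; FindBugs's own annotation behaves like the CLASS-retained one):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// An annotation without RUNTIME retention is recorded at compile time but
// invisible to reflection once the class is loaded.
@interface CompileTimeOnly {}          // default retention is CLASS, not RUNTIME

@Retention(RetentionPolicy.RUNTIME)
@interface KeptAtRuntime {}

class RetentionDemo {

	@CompileTimeOnly
	@KeptAtRuntime
	static void annotated() {}

	// Only annotations retained at runtime are returned by reflection.
	static int visibleAnnotations() throws Exception {
		Method method = RetentionDemo.class.getDeclaredMethod("annotated");

		return method.getAnnotations().length;
	}
}
```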

In OSGi, we will also tend to use compileOnly for the org.osgi.core and osgi.cmpn dependencies, not because we don't need them at runtime, but because we know that within an OSGi container these packages will always be provided (so the manifest does not need to enforce it), plus we might want to use our jar outside of an OSGi container.

For those with a Maven background, the compileOnly configuration is similar to Maven's provided scope.


compileInclude is the last configuration to cover. Like compile and compileOnly, this configuration will include the dependency in the compile classpath.

The compileInclude configuration was actually introduced by Liferay and is included in Liferay's Gradle plugins.

The compileInclude configuration replaces the manual steps from my OSGi Dependencies blog post, option #4, including the jars in the bundle.

In fact, everything my blog talks about with adding the Bundle-ClassPath directive and the -includeresource instruction to the bnd.bnd file, compileInclude does for you. Where compileInclude shines, though, is that it will also include some of the transitive dependencies in the module.

Note how I said that some of the transitive dependencies are included? I haven't quite figured out how it decides which transitive dependencies to include, but I do know it is not always 100% correct. I've had cases where it missed a particular transitive dependency. I do know it will not include optional dependencies, and that may have been the cause in those cases. To fix it, though, I would just add a compileInclude configuration line for the missing transitive dependency.

You can disable the transitive dependency inclusion by adding a flag at the end of the declaration. For example, if I only wanted poi-ooxml but for some reason didn't want its transitive dependencies, I could use the following:

compileInclude group: 'org.apache.poi', name: 'poi-ooxml', version: '4.0.0', transitive: false

It's then up to you to include or exclude the transitive dependencies, but at least you won't need to manually update the bnd.bnd file.

If you're getting the impression that compileInclude will mostly work but may make some bad choices (including some you don't want and excluding some that you need), you would be correct. It will never offer you the type of precise control you can have by using Bundle-ClassPath and -includeresource. It just happens to be a lot less work.

For those who use Maven, I'm sorry but you're kind of out of luck as there is no corresponding Maven scope for this one.


I hope this clarifies whatever confusion you might have with these three configurations.

If you need recommendations of what configuration to use when, I guess I would offer the following:

  • For packages that you know will be available from the OSGi container, use the compile configuration. This includes your custom code from other modules, all com.liferay code, etc.
  • For packages that you do not want from the OSGi container or don't think will be provided by the OSGi container, use the compileInclude configuration. This is basically all of those third party libraries that you won't be pushing as modules to the container.
  • For all others, use the compileOnly configuration.
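Pulling those recommendations together, a dependencies block might look like this sketch (coordinates and versions are illustrative examples, not prescriptions):

```groovy
dependencies {
	// provided by the OSGi container at runtime (Liferay and your own modules)
	compile group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "2.0.0"

	// needed only at compile time; the container always provides these packages
	compileOnly group: "org.osgi", name: "org.osgi.core", version: "6.0.0"

	// third-party library embedded into the bundle itself
	compileInclude group: "org.apache.poi", name: "poi-ooxml", version: "4.0.0"
}
```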


David H Nebinger 2018-09-07T17:55:00Z
Categories: CMS, ECM

Can We Get a Little Help Over Here?

Liferay - Thu, 09/06/2018 - 13:03

One of the benefits that you get from an enterprise-class JEE application server is a centralized administration console.

Rather than needing to manage nodes individually like you would with Tomcat, the JEE admin console can work on the whole cluster at one time.

But with Liferay 7 CE and Liferay 7 DXP and the deployment of OSGi bundle jars, portlet/theme wars and Liferay lpkg files, the JEE admin console cannot be used to push your shiny new module or even a theme war file, because it won't know to drop these files in the Liferay deploy folder.

Enter the Deployment Helper

So Liferay created this Maven and Gradle plugin called the Deployment Helper to give you a hand here.

Using the Deployment Helper, the last part of your build is the generation of a war file that contains the bundle jars and theme wars, but is a single war artifact.

This artifact can be deployed to all cluster nodes using the centralized admin console.

Adding the Deployment Helper to the Build

To add the Deployment Helper to your Gradle-based build:

To add the Deployment Helper to your Maven-based build:

While both pages offer the technical details, they are still awfully terse when it comes to usage.

Gradle Deployment Helper

Basically for Gradle you get a new task, the buildDeploymentHelper task. You can execute gradlew buildDeploymentHelper on the command line after including the plugin and you'll get a war file, but probably one that you'll want to configure.

The plugin is supposed to pull in all jar files for you, so that will cover all of your modules. You'll want to update the deploymentFiles to include your theme wars and any of the artifacts you might be pulling in from the legacy SDK.

In the example below, my Liferay Gradle Workspace has the following build.gradle file:

```groovy
buildscript {
	dependencies {
		classpath group: "com.liferay", name: "com.liferay.gradle.plugins", version: "3.12.48"
		classpath group: "com.liferay", name: "com.liferay.gradle.plugins.deployment.helper", version: "1.0.3"
	}

	repositories {
		maven {
			url ""
		}
	}
}

apply plugin: "com.liferay.deployment.helper"

buildDeploymentHelper {
	deploymentFiles = fileTree('modules') {include '**/build/libs/*.jar'} +
		fileTree('themes') {include '**/build/libs/*.war'}
}
```

This will include all wars from the theme folder and all module jars from the modules folder. Since I'm being specific on the paths for files to include, any wars and jars that might happen to be polluting the directories will be avoided.

Maven Deployment Helper

The Maven Deployment Helper has a similar task, but of course you're going to use the pom to configure it, and you have a different command line.

The Maven equivalent of the Gradle config would be something along the lines of:

```xml
<build>
	<plugins>
		...
		<plugin>
			<groupId>com.liferay</groupId>
			<artifactId>com.liferay.deployment.helper</artifactId>
			<version>1.0.4</version>
			<configuration>
				<deploymentFileNames>
					modules/my-mod-a/build/libs/my-mod-a-1.0.0.jar,
					modules/my-mod-b/build/libs/my-mod-b-1.0.0.jar,
					...,
					themes/my-theme/build/libs/my-theme-1.0.0.war
				</deploymentFileNames>
			</configuration>
		</plugin>
		...
	</plugins>
</build>
```

Unfortunately, you can't do any cool wildcard magic here; you're going to have to list out each file to include.

Deployment Helper War

So you've built a war now using the Deployment Helper, but what does it contain? Here's a sample from one of my projects:

Basically you get a single class, the com.liferay.deployment.helper.servlet.DeploymentHelperContextListener class.

You also get a web.xml for the war.

And finally, you get all of the files that you listed for the deployment helper task.


You can find the source for DeploymentHelperContextListener here, but I'll give you a quick summary.

So we have two key methods, copy() and contextInitialized().

The copy() method does, well, the copying of data from the input stream (the artifact to be deployed) to the output stream (the target file in the Liferay deploy folder). Nothing fancy.
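As a minimal sketch (not the actual Liferay source), a copy() method like this boils down to draining an InputStream into an OutputStream in chunks:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Hypothetical stand-in for the copy() helper: read the artifact stream in
// buffered chunks and write each chunk to the target file's stream.
class StreamCopy {

	static long copy(InputStream in, OutputStream out) throws IOException {
		byte[] buffer = new byte[8192];

		long total = 0;
		int read;

		while ((read = in.read(buffer)) != -1) {
			out.write(buffer, 0, read);

			total += read;
		}

		return total;
	}
}
```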

The contextInitialized() method is the implementation of the ServletContextListener interface and will be invoked when the application container has constructed the war's ServletContext.

If you scan the method, you can see how the parameters that are options to the plugins will eventually get to us via context parameters.

It then loops through the list of deployment filenames, and for each one in the list it will create the target file in the deploy directory (the Liferay deploy folder), and it will use the copy() method to copy the data out to the filesystem.

Lastly it will invoke the DeployManagerUtil.undeploy() on the current servlet context (itself) to attempt to remove the deployment helper permanently. Note that per the listed restrictions, this may not actually undeploy the Deployment Helper war.


The web.xml file is pretty boring:

```xml
<?xml version="1.0"?>
<web-app xmlns=""
	xmlns:xsi=""
	version="3.0"
	xsi:schemaLocation="">
	<context-param>
		<description>A comma delimited list of files in the WAR that should be
		deployed to the deployment-path. The paths must be absolute paths
		within the WAR.</description>
		<param-name>deployment-files</param-name>
		<param-value>/WEB-INF/micro.maintainance.outdated.task-1.0.0.jar,
			/WEB-INF/, ...
		</param-value>
	</context-param>
	<context-param>
		<description>The absolute path to the Liferay deploy folder on the
		target system.</description>
		<param-name>deployment-path</param-name>
		<param-value></param-value>
	</context-param>
	<listener>
		<listener-class>com.liferay.deployment.helper.servlet.DeploymentHelperContextListener</listener-class>
	</listener>
</web-app>
```

Each of the files to be deployed is listed as a parameter for the context listener.

The rest of the war is just the files themselves.

An Important Limitation

So there is one important limitation you should be aware of...

ContextListeners are invoked every time the container is restarted or the war is (re)deployed.

If your deployment helper cannot undeploy itself, every time you restart the container, all of your artifacts in the Deployment Helper war are going to be processed again.

So, as part of your deployment process, you should verify and ensure that the Deployment Helper has been undeployed, whether it can remove itself or whether you must manually undeploy from the centralized console.


So now, using the Deployment Helper, you can create a war file that contains files to deploy to Liferay: the module jars, the portlet wars, the theme wars and yes, you could even deploy Liferay .lpkg files, .lar files and even your licence xml file (for DXP).

You can create the Deployment Helper war file directly out of your build tools. If you are using CI, you can use the war as one of your tracked artifacts.

Your operations folks will be happy in that they can return to using the centralized admin console to deploy a war to all of the nodes, they won't need to copy everything to the target server's deploy folders manually.

They may gripe a little about deploying the bundle followed by an undeploy when the war has started, but you just need to remind them of the pain that you're saving them from the older, manual deployment process.


David H Nebinger 2018-09-06T18:03:00Z
Categories: CMS, ECM

Today's Top Ten: Ten Reasons to Avoid Sessions

Liferay - Wed, 09/05/2018 - 19:31

From the home office outside of Charleston, South Carolina, here are the top ten reasons to avoid Portlet and HTTP session storage:

Number 10 - With sticky sessions, traffic originating from a web address will be sent to the same node, even if other nodes in the cluster have more capacity. You lose some of the advantages of a cluster because you cannot direct traffic to nodes with available capacity; instead, your users are stuck on the node they first landed on, and hopefully nobody starts a heavyweight task on that node...

Number 9 - With sticky sessions, if the node fails, users on that node lose whatever was in the session store. The cluster will flip them over to an available node, but it cannot magically restore the session data.

Number 8 - If you use session replication to protect against node outage, the inter-node network traffic increases to pass around session data that most of the time is not necessary. You need it in case of node failure, but when was your last node failure? How much network and CPU capacity are you sacrificing for this "just in case" standby?

Number 7 - Neither sticky session data nor even session replication will help you in case of disaster recovery. If your primary datacenter goes off the net and your users are switched over to the DR site, you often have backup and files syncing to DR for recovery, but who sync's session data?

Number 6 - Session storage is a memory hit. Session data is typically kept in memory, so if you have 5k of session data per user but you get slashdotted and have 100k users hit your system, that's roughly a 500MB memory hit you're going to take for session storage. If your numbers are bigger, well, you can do the math. If you have replication, all of that data is replicated to and retained on all nodes.
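The arithmetic behind that memory hit is easy to check; a quick sketch using the numbers above:

```java
public class SessionMemoryEstimate {

    public static void main(String[] args) {
        long perUserBytes = 5 * 1024;  // 5k of session data per user
        long users = 100_000;          // the slashdotting scenario above
        long totalMB = (perUserBytes * users) / (1024 * 1024);

        // Roughly 488 MB on one node; session replication then multiplies
        // this cost by the number of nodes in the cluster.
        System.out.println(totalMB + " MB per node");
    }
}
```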

Number 5 - With sticky sessions, to upgrade a node you pull it out of the cluster, but now you need to wait for either folks to log out (ending their session), or wait for the sessions to expire (typically 30 minutes, but if you have an active user on that node, you could wait for a long time), or you can just kill the session and impact the user experience. All for what should be a simple process of taking the node out of circulation, updating the node, then getting it back into circulation.

Number 4 - Having a node out of circulation for a long time because of sessions is a risk that your cluster will not be up to handling capacity, or it will be handling the capacity with fewer resources. In a two node cluster, if you have one node out of circulation in preparation for an update and the second fails, you have no active servers available to serve traffic.

Number 3 - In a massive cluster, session replication is not practical. The nodes will spend more time trying to keep each other's sessions in sync than they will serving real traffic.

Number 2 - Session timeouts discard session data, whether clients want that or not. If I put 3 items in my shopping cart but step away to play with my kids, when I come back and log back in, those 3 items should still be there. For much of the data we would otherwise stuff into a session, users expect this in-flight data to come back when they log back in.

And the number one reason to avoid sessions:

Number 1 - They are a hack!

Okay, so this was a little corny, I know. But it is also accurate and important.

All of these issues are things you will face deploying Liferay, your portlets, or really any web application. If you avoid session storage at all costs, you avoid all of these problems.

But I get it. As a developer, session storage is just so darn attractive and easy and alluring. Don't know where to put it but need it in the future? Session storage to the rescue!

But really, session storage is like drugs. If you start using them, you're going to get hooked. Before you know it you're going to be abusing sessions and your applications are going to suffer as a result. They really are better off avoided altogether.

There's a reason that shopping carts don't store themselves in sessions; they were too problematic. Your data is probably a lot more important than what kind of toothpaste I have in my cart, so if persistence is good enough for them, it is good enough for your data too.

And honestly, there are just so many better alternatives!

Have a multi-page form where you want to accumulate results for a single submit? Hidden fields carrying the values from the previous pages' form elements let the client store this data.

Cookies are another way to push the storage to the browser, keep it off of the server and keep your server side code stateless.
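As a tiny illustration of pushing state to the browser, here is a sketch of encoding a small piece of state into a cookie-safe value; the class and method names are made up, and as the post argues, a real cart still belongs in persistent storage:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class CartCookieCodec {

    // Encodes a small piece of transient state into a cookie-safe value,
    // keeping it off the server and the server-side code stateless.
    public static String encode(String state) {
        return Base64.getUrlEncoder().withoutPadding()
            .encodeToString(state.getBytes(StandardCharsets.UTF_8));
    }

    // Decodes the cookie value back into the original state string.
    public static String decode(String cookieValue) {
        return new String(Base64.getUrlDecoder().decode(cookieValue),
            StandardCharsets.UTF_8);
    }
}
```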

Data storage in a NoSQL database like Mongo is very popular, can be shared across the cluster (no replication) and, since it is schemaless, can easily persist incomplete data. 

It's even possible to do this in a relational database too.

So don't be lured in by the Siren's song. Avoid the doom they offer, and avoid session storage at all costs!

David H Nebinger 2018-09-06T00:31:00Z
Categories: CMS, ECM

The Good, The Bad and The Ugly

Liferay - Wed, 09/05/2018 - 11:34
The Ugly

In one of the first Liferay projects I ever did, I had a need to have some Roles in the environment. They needed to be there so I knew they were available for a custom portlet to assign when necessary.

I was working for an organization that had a structured Software Configuration Management process that was well defined and well enforced.

So code deployments were a big deal. There were real hard-copy deployment documents that itemized everything the Operations folks needed to do. As the developer, I didn't have access to any of the non-development environments, so anything that needed to be changed in the environment had to be part of a deployment and it had to be listed in the deployment documents.

And it was miserable. Sometimes I would forget to include the Role changes. Sometimes the Operations folks would skip the step I had in the docs. Sometimes they would fat-finger the data input and I'd end up with a Roel instead of my expected Role.

The Bad

So I quickly researched and implemented a Startup Hook.

For those that don't know about them, a Startup Hook was a Liferay-supported way to run code either at container start (Global) or at application start (Application). The big difference is that a container start only happens once, but an Application start happens when the container starts but also every time you would redeploy your hook.

Instead of having to include documentation telling the Operations folks how to create the roles I needed, my application startup hook would take care of that.

There were just three issues that I would have to code around:

The code runs at every startup (either Global or Application), so you need to check whether you have already completed an action before doing it. Since I wouldn't want to keep trying (and failing) to add a duplicate role, I have to check if the role is there and only add it if it is not found.

There is no "memory" to work from, so each implementation must expect it could be running in any state. I had a number of roles I was using and I would often times need to add a couple of more. My startup action could not assume there were no roles, nor could it assume that the previous list of roles was done and only the newer ones needed to be added. No, I had to check each role individually as it was the only way to ensure my code would work in any environment it was deployed to.

The last issue, well that was a bug in my code that I couldn't fix. My startup action would run and would create a missing role. That was good. But if an administrator changed the role name, the next time my startup action would run, it would recreate the role. Because it was missing, see? Not because I had never created it, but because an administrator took steps to remove or rename it.

That last one was nasty. I could have taken the approach of populating a Release record using ReleaseLocalService, but that would be one other check that my startup action would need to perform. Pretty soon my simple startup action would turn into a development effort on its own.
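The first two issues boil down to idempotent, check-each-item logic. A minimal sketch of that discipline, where a plain Map stands in for Liferay's role service (RoleBootstrap and all of its names are hypothetical):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RoleBootstrap {

    // Stand-in for a role store; in a real startup hook this would be a
    // call into Liferay's role service, not an in-memory map.
    private final Map<String, String> roleStore = new HashMap<>();

    // Idempotent: each role is checked individually, so the hook is safe
    // to run at every startup regardless of the environment's state.
    public void ensureRoles(List<String> roleNames) {
        for (String name : roleNames) {
            roleStore.computeIfAbsent(name, n -> n); // create only if missing
        }
    }

    public boolean hasRole(String name) {
        return roleStore.containsKey(name);
    }

    public int roleCount() {
        return roleStore.size();
    }
}
```

Note that this pattern still has the third issue baked in: it cannot tell "never created" apart from "deliberately removed by an administrator".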

The Good

With Liferay 7 CE and Liferay DXP, though, we have something better - the Upgrade Process. [Well, actually it was available in 6.2, but I didn't cotton to it until LR7, my bad]

I've blogged about this before because, well, I'm actually a huge fan.

By using an upgrade process, my old problem of creating the roles is easily handled. But even better, the problems that came from my startup action have been eliminated!

An upgrade process will only run once; if it successfully completes, it will not execute again. So my code to create the roles, it doesn't need to see if it was already done because it won't run twice.

An upgrade process has a memory in that it only runs the steps necessary to get to the latest version. So I can create some roles in 1.0.0, create additional roles in 1.1.0, I could remove a role in 1.1.1, add some new roles in 1.2.0... Regardless of the environment my module is deployed to, only the necessary upgrade processes will run. So a new deployment to a clean Liferay will run all of the upgrades, in order, but a deployment to an environment that had 1.1.1 will only execute the 1.2.0 upgrade.
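The version-ordering behavior described above can be sketched with a tiny simulation; this only illustrates the idea and is not Liferay's actual upgrade framework API (and the lexicographic version comparison used here only works for single-digit version segments):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

public class UpgradeRunner {

    // Upgrade steps keyed by the schema version they upgrade *to*.
    private final SortedMap<String, Runnable> steps = new TreeMap<>();

    public void register(String toVersion, Runnable step) {
        steps.put(toVersion, step);
    }

    // Runs only the steps after the currently recorded version, in order,
    // and returns the versions that were applied.
    public List<String> upgradeFrom(String currentVersion) {
        List<String> applied = new ArrayList<>();
        for (Map.Entry<String, Runnable> e : steps.entrySet()) {
            if (e.getKey().compareTo(currentVersion) > 0) {
                e.getValue().run();
                applied.add(e.getKey());
            }
        }
        return applied;
    }
}
```

An environment sitting at 1.1.1 only runs the 1.2.0 step; a clean environment runs every step in order.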

The bug from the startup action process? It's gone. Since my upgrade will only run once, if an admin changes a role name or removes it, my upgrade process will not run again and recreate it.


Besides paying homage to one of the best movies ever made, I hope I've pointed out the different ways that have been supported over time for preparing an environment to run your code.

The ugly way, the manual way, well you really want to discard this as it does carry a risk of failure due to user error.

The bad way, well it was the only way for Liferay before 7, but it has obvious issues and a new and better process exists to replace them.

The good way, the upgrade process, is a solid way to prepare an environment for running your module.

A good friend of mine, Todd, recently told me he wasn't planning on using an upgrade process because he was not doing anything with a Liferay upgrade.

I think this suggests that the "upgrade process" perhaps has an invalid name. Upgrade processes are not limited to Liferay upgrades.

Perhaps we should call them "Environment Upgrade Processes" as that would imply we are upgrading the environment, not just Liferay.

What do you think? Do you have a better name? If so, I'd love to hear it!

David H Nebinger 2018-09-05T16:34:00Z
Categories: CMS, ECM

Liferay IntelliJ Plugin 1.1.1 Released

Liferay - Tue, 09/04/2018 - 22:40


The Liferay IntelliJ 1.1.1 plugin has been made available today. Head over to this page to download it.


Release Highlights:


  • Watch task decorator improvements

  • Support for Liferay DXP Wildfly and CE Wildfly 7.0 and 7.1

  • Better integration for Liferay Workspace

  • Improved Editing Support

    • Code completion for resource bundle keys for Liferay Taglib

    • Code completion and syntax highlighting for JavaScript in Liferay Taglibs

    • Better Java editor with OSGi annotations


Using Editors






Special Thanks

Thanks to Dominik Marks for the improvements.

Yanan Yuan 2018-09-05T03:40:00Z
Categories: CMS, ECM


Drupal - Mon, 09/03/2018 - 11:46
Completed Drupal site or project URL: business registration in North Rhine-Westphalia

Since the 1st of July 2018, the new "Gewerbe-Service-Portal.NRW" has been providing citizen-friendly eGovernment by allowing company founders in the German federal state North Rhine-Westphalia (NRW) to electronically register a business from home. The implementation was carried out by publicplan GmbH on behalf of d-NRW AöR. With the aid of a clearly arranged online form, commercial registration can be transmitted to the responsible public authorities with just a few clicks. Furthermore, an integrated chatbot helps the user with questions.

Service portal

In addition to the business registration, the portal offers information on the topic “foundation of an enterprise”. Furthermore, users have access to all service providers of the "Einheitliche Ansprechpartner NRW" (EA NRW). The online service supports specialised staff in taking up a service occupation or professional authentication. The search for a competent trading supervision department can also occur via the “Verwaltungssuchmaschine” (VSM) that was developed by d-NRW and publicplan GmbH on behalf of the “Ministerium für Wirtschaft, Innovation, Digitalisierung und Energie NRW” (MWIDE). The VSM is a search engine specialized for information about the public sector.

Business registration together with Chatbot “Guido“

"Guido" is a smart dialogue assistant for questions. He ensures automatic retrievability of information in plain language and is also able to identify each business type by a key. The chatbot determines every suitable business type by approaching the key through request of information. After successful determination, it is automatically transmitted to the form. Therefore, “Guido” saves the complicated search for many similar types of business. The director of publicplan GmbH, Dr. Christian Knebel says: "Thanks to our numerous eGovernment projects, we can draw on a wealth of experience in order to implement such a comprehensive portal. publicplan's integrated chatbot technology is the perfect complement to a contemporary citizen service."

Categories: CMS

Meet The Extenders

Liferay - Mon, 09/03/2018 - 11:40

As I spend more time digging around in the bowels of the Liferay source code, I'm always learning new things.

Recently I was digging in the Liferay extenders and thought I would share some of what I found.

Extender Pattern

So what is this Extender pattern anyway? Maybe you've heard about it related to Liferay's WAB extender or Spring extender or Xyz extender, maybe you have a guess about what they are but as long as they work, maybe that's good enough. If you're like me, though, you'd like to know more, so here goes...

The Extender Pattern is actually an official OSGi pattern. The simplest complete definition I found is:

The extender pattern is commonly used to provide additional functionality at run time based on bundle content. For this purpose, an extender bundle scans new bundles at a certain point in their life cycles and decides whether to take additional actions based on the scans. Additional actions might include creating extra resources, instantiating components, publishing services externally, and so forth. The majority of the functionality in the OSGi Service Platform Enterprise Specification is supported through extender bundles, most notably Blueprint, JPA, and WAB support. - Getting Started with the Feature Pack for OSGi Applications and JPA 2.0 by Sadtler, Haischt, Huber and Mahrwald.

But you can get a much larger introduction to the OSGi Extender Model if you want to learn about the pattern in more depth.

Long story short, an extender will inspect files in a bundle that is starting and can automate some functionality. So for DS, for example, it can find classes decorated with the @Component annotation and work with them. The extender takes care of the grunt work that we, as developers, would otherwise have to keep writing to register and start our component instances.
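The scan-and-decide step at the heart of the pattern can be boiled down to a few lines of plain Java. This sketch uses a stand-in @Component annotation and an in-memory class list rather than the real DS machinery and bundle wiring:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.ArrayList;
import java.util.List;

public class ExtenderScan {

    // Stand-in for an annotation an extender might look for
    // (think of DS's @Component).
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    public @interface Component {}

    @Component
    public static class MyPortlet {}

    public static class PlainClass {}

    // The essence of the extender pattern: inspect the classes a bundle
    // contributes and pick out the ones that opted in via metadata.
    public static List<Class<?>> findComponents(List<Class<?>> bundleClasses) {
        List<Class<?>> found = new ArrayList<>();
        for (Class<?> c : bundleClasses) {
            if (c.isAnnotationPresent(Component.class)) {
                found.add(c);
            }
        }
        return found;
    }
}
```

A real extender does this scan when a bundle reaches a certain lifecycle state, then registers and manages the discovered components.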

Liferay actually has a number of extenders, let's check them out...


HttpTunnel Extender

This extender is responsible for wiring up the HttpTunnel servlet for the bundle. If the bundle has a header, Http-Tunnel, it will have a tunnel servlet wired up for it. It will actually create a number of supporting service registrations:

  • AuthVerifierFilter for authentication verification (verification via the AuthVerify pipeline, includes BasicAuth and Liferay session auth).
  • ServletContextHelper for the servlet context for the bundle.
  • Servlet for the tunnel servlet itself. For those that don't know, the tunnel servlet allows for sending commands to a remote Liferay instance via the protected tunnel servlet and is a core part of how remote staging works.

ThemeContributorExtender

Yep, the theme contributors are implemented using the Extender pattern.

This extender looks for either the Liferay-Theme-Contributor-Type header from the bundle (via your bnd.bnd file) or if there is a package.json file, it looks for the themeContributorType. It also ensures that resources are included in the bundle.

The ThemeContributorExtender will then register two services for the bundle, the first is a ThemeContributorPortalWebResources instance for exposing bundle resources as Portal Web Resources, the other is an instance of BundleWebResources to actually expose and return the bundle resources.


Configurator Extender

The configurator extender is kind of interesting in that there don't seem to be any actual implementations using it.

Basically, if there is a bundle header, Liferay-Configuration-Path, the configurator extender will use properties files in this path to set configuration in Configuration Admin. I'm guessing this may be useful if you wanted to force a configuration through a bundle deployment, i.e. if you wanted to push a bundle to partially change the ElasticSearch configuration.
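Reading such a properties file is standard Java; a minimal sketch of the loading half (handing the values over to Configuration Admin is omitted, and the class name is made up):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class ConfigFileReader {

    // Loads a .properties-style configuration file, as a configurator
    // extender might before pushing the values into Configuration Admin.
    public static Properties load(String fileContents) throws IOException {
        Properties props = new Properties();
        props.load(new StringReader(fileContents));
        return props;
    }
}
```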


Language Extender

The language extender handles resource bundle processing for the bundle. If the bundle has the liferay.resource.bundle capability, it creates the extension that knows how to parse the capability string (the one that often includes base names, aggregate contexts, etc.) and registers a special aggregating Resource Bundle Loader into OSGi.
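The aggregation idea can be sketched with plain java.util.ResourceBundle. This illustrates layering bundles with first-one-wins semantics; it is not Liferay's actual aggregating loader:

```java
import java.util.Collections;
import java.util.Enumeration;
import java.util.LinkedHashSet;
import java.util.ResourceBundle;
import java.util.Set;

public class AggregateResourceBundle extends ResourceBundle {

    private final ResourceBundle[] bundles;

    // Earlier bundles win on key conflicts, mirroring the idea of
    // layering module bundles over base ones.
    public AggregateResourceBundle(ResourceBundle... bundles) {
        this.bundles = bundles;
    }

    @Override
    protected Object handleGetObject(String key) {
        for (ResourceBundle b : bundles) {
            if (b.containsKey(key)) {
                return b.getObject(key);
            }
        }
        return null; // ResourceBundle reports a MissingResourceException
    }

    @Override
    public Enumeration<String> getKeys() {
        Set<String> keys = new LinkedHashSet<>();
        for (ResourceBundle b : bundles) {
            keys.addAll(Collections.list(b.getKeys()));
        }
        return Collections.enumeration(keys);
    }
}
```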


ModuleApplicationContextExtender

This is the magical extender that makes ServiceBuilder work in Liferay 7 CE and Liferay 7 DXP... This one is going to take some more space to document.

The extender works for all bundles with a Liferay-Spring-Context header (pulled in from bnd.bnd of course).

First the ModuleApplicationContextExtender creates a ModuleApplicationContextRegistrator instance which creates a ModuleApplicationContext instance for the bundle that will be the Spring context the modules beans are created in/from. It uses the Liferay-Service and Liferay-Spring-Context bundle headers to identify the Spring context xml files and the ModuleApplicationContext will be used by Spring to instantiate everything.

It next creates the BeanLocator for the bundle (remember those from the Liferay 6 days? It is how the portal can find services when requested in the templates and scripting control panel).

The ModuleApplicationContextRegistrator then initializes the services and finally registers each of the Spring beans in the bundle as OSGi components (that's why you can @Reference SB services without them actually being declared as @Components in the SB implementation classes).

The ModuleApplicationContextExtender isn't done yet though. The Spring initialization was to prepare a dynamically created component, managed by the OSGi Dependency Manager so that OSGi will take over the management (lifecycle) of the new Spring context.

If the bundle has a Liferay-Require-SchemaVersion header, the ModuleApplicationContextExtender will add the requirement for the listed version as a component dependency. This is how the x.y.z version of the service implementation gets bound specifically to the x.y.z version of the API module. It is also why it is important to make sure that the Liferay-Require-SchemaVersion header is kept in sync with the version you stamp on the API module, and also why it is also important to actually remember to bump your module version numbers when you change the service.xml file and/or the method signatures on your entity or service classes.
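For reference, these headers live in the module's bnd.bnd; a sketch with illustrative values (the version numbers and paths here are hypothetical):

```
Bundle-Version: 1.2.0
Liferay-Require-SchemaVersion: 1.2.0
Liferay-Service: true
Liferay-Spring-Context: META-INF/spring
```

Letting Bundle-Version drift away from Liferay-Require-SchemaVersion (and the API module's version) is exactly the mismatch the paragraph above warns about.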

The  ModuleApplicationContextExtender has a final responsibility, the initial creation of the SB tables, indexes and sequences. Normally when we are dealing with upgrades, we must manually create our UpgradeStepRegistrator components and have them register upgrade steps to be called when a version is changing. But the one upgrade step we never have to write is the ServiceBuilder's 0.0.0 to x.y.z upgrade step. The ModuleApplicationContextExtender automatically registers the upgrade step to basically apply the scripts to create the tables, indexes and sequences.

So yeah, there's a lot going on here. If you wondered how your service module gets exposed to Liferay, Spring and OSGi, well now you know.


WabFactory

I actually think this extender is poorly named (my own opinion). The name WabFactory implies (to me) that it may only be for WABs, but it actually covers generic Servlet stuff as well.

Liferay uses the OSGi HTTP Whiteboard service for exposing all servlets, filters, etc. Your REST service? HTTP Whiteboard. Your JSP portlet? Yep, HTTP Whiteboard. The Theme Contributor? Yep, same. Simple servlets? Yep. Your "legacy" portlet WAR that gets turned into a WAB by Liferay, it too is exposed via the HTTP Whiteboard.

This is a good thing, of course, because the dynamism of OSGi Declarative Services, your various implementations can be added, removed and restarted without affecting the container.

But, what you may not know, the HTTP Whiteboard is very agnostic towards implementation details. For example, it doesn't talk about JSP handling at all. Nor the REST handling, MIME types, etc. It really talks to how services can be registered such that the HTTP Whiteboard will be able to delegate incoming requests to the correct service implementations.

To that end, there's a series of steps to perform for vanilla HTTP Whiteboard registration. First you need the Http service reference so you can register. You need to create a ServletContextHandler for the bundle (to expose your bundle resources as the servlet context), and then you use it to register your servlets, your filters and your listeners that come from the bundle (or elsewhere). That's a lot of boilerplate code that deals with the interaction with OSGi; if you're a servlet developer, you shouldn't need to master all of those OSGi aspects.
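For a flavor of that boilerplate, here is a sketch of just the service-property half of a vanilla registration. The property names come from the OSGi HTTP Whiteboard specification; the class and method names are made up:

```java
import java.util.Hashtable;

public class WhiteboardProps {

    // Builds the service properties for registering a servlet with the
    // HTTP Whiteboard; with a real BundleContext you would then call
    // bundleContext.registerService(Servlet.class, servlet, props).
    public static Hashtable<String, Object> servletProps(String contextName, String pattern) {
        Hashtable<String, Object> props = new Hashtable<>();

        // Bind the servlet to a particular servlet context...
        props.put("osgi.http.whiteboard.context.select",
            "(osgi.http.whiteboard.context.name=" + contextName + ")");

        // ...and declare the URL pattern it serves.
        props.put("osgi.http.whiteboard.servlet.pattern", pattern);

        return props;
    }
}
```

And that is before the ServletContextHelper registration, the filters, the listeners, and the JSP handling the WabFactory also takes care of.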

And then there's still JSP to deal with, REST service, etc. A lot of stuff that you'd need to do.

But we don't, because of the WabFactory.

It is the WabFactory, for example, that processes the Web-ContextPath and Web-ContextName bundle headers so we don't have to decorate every servlet with the HTTP Whiteboard properties. It registers the ServletContextHelper instance for the bundle and also binds all of the servlets, filters and listeners to the servlet context. It registers a JspServlet so the JSP files in the bundle can be compiled and used (you did realize that Liferay is compiling bundle JSPs, and not your application container, right?).

There's actually a lot of other functionality baked in here, in the portal-osgi-web-wab-extender module. I would encourage you to dig through it if you want to understand how all of this magic happens.

This is a new extender introduced in 7.1.

This extender actually uses two different bundle headers, Liferay-JS-Resources-Top-Head and Liferay-JS-Resources-Top-Head-Authenticated to handle the inclusion of JS resources in the top of the page, possibly using different resources for guest vs authenticated sessions.

This new extender basically deprecates the old javascript.barebone.files and javascript.everything.files portal properties and allows modules to supply their own scripts dynamically to include in the HTML page's <head /> area.

You can actually see this being used in the new frontend-js-web module's bnd.bnd file.  You can override using a custom module and a Liferay-Top-Head-Weight bundle header to define a higher service ranking.


So we started out with a basic definition of the OSGi Extender pattern. We then made a hard turn towards the concrete implementations currently used in Liferay.

I hope you can see how these extenders are actually an important part of Liferay 7 CE and Liferay 7 DXP, especially from a development perspective.

If the extenders weren't there, as developers we would need to be generating a lot of boilerplate code. If you tear into the ModuleApplicationContextExtender and everything it is doing for our ServiceBuilder implementations, stop and ask yourself how productive your day would be if you had to wire all of that up yourself. How many bugs would we otherwise have had?

Perhaps this review of Liferay Extenders may give you some idea of how you can eliminate boilerplate code in your own projects...

David H Nebinger 2018-09-03T16:40:00Z
Categories: CMS, ECM

Simplifying the Compatibility Matrix

Liferay - Tue, 08/28/2018 - 15:10

One of the initiatives our Quality Assurance team has taken over the past few years has been to try to automate all of their testing. This has resulted in testing all of our compatible environments in a similar fashion. Thus, we no longer need to distinguish between Supported and Certified environments.


Starting with Liferay DXP 7.0, we will now list all our compatible environments together.  We hope this will make the decision to use Liferay DXP even easier.


Here is the new Compatibility Matrix for 7.0 and the Compatibility Matrix for 7.1.

David Truong 2018-08-28T20:10:00Z
Categories: CMS, ECM

Joomla 3.8.12 Release

Joomla! - Tue, 08/28/2018 - 08:45

Joomla 3.8.12 is now available. This is a security release for the 3.x series of Joomla which addresses 3 security vulnerabilities and contains over 20 bug fixes and improvements.

Categories: CMS

New Liferay Project SDK Installers 3.3.0 GA1 Released

Liferay - Mon, 08/27/2018 - 21:16

We are pleased to announce the new release of Liferay Project SDK Installers version 3.3.0 GA1.




Customers can download all of them on the customer studio download page.


Upgrade From previous 3.2:


  1. Download updatesite here

  2. Go to Help > Install New Software… > Add…

  3. Select Archive...Browse to the downloaded updatesite

  4. Click OK to close Add repository dialog

  5. Select all features to upgrade, then click Next; click Next again and accept the license agreements

  6. Finish and restart to complete the upgrade

Release highlights:


Installers Improvements:


Add option to install Developer Studio only


Developer Studio Improvements and Fixes:


1. Code Upgrade Tool Improvements

  • upgrade to Liferay 7.1 support
    • convert Liferay Plugins SDK 6.2 to Liferay Workspace 7.0 or 7.1
    • convert Liferay Workspace 7.0 to Liferay Workspace 7.1
  • added Liferay DXP/7.1 breaking changes
  • various performance improvements

2. Enabled dependencies management for Target Platform

3. Fixed source lookup during watch task


Using Code Upgrade Tool






If you run into any issues or have any suggestions please come find us on our community forums or report them on JIRA (IDE project), we are always around to try to help you out. Good luck!

Yanan Yuan 2018-08-28T02:16:00Z
Categories: CMS, ECM

Revisiting OSGi DS Annotations

Liferay - Sat, 08/25/2018 - 08:51

I've been asked a couple of times recently about different aspects of the @Modified annotation that I'm not sure have been made clear in the documentation, so I wanted to cover the lifecycle annotations in a little more detail so we can use them effectively.

The @Activate, @Deactivate and @Modified annotations are used for lifecycle event notifications for the DS components. They get called when the component itself is activated, deactivated or modified and allow you to take appropriate action as a result.

One important note - these lifecycle events will only be triggered if your @Component is actually alive. I know, sounds kind of weird, but it can happen. If you have an @Reference which is mandatory but is not available, your @Component will not be alive and your @Activate (and other) method will not be invoked.


@Activate

This annotation is used to notify your component that it is now loaded, resolved and ready to provide service. You use this method to do some final setup in your component. It is equivalent to the afterPropertiesSet() method from Spring's InitializingBean interface.

One of the cool things about the @Activate method is that the signature for the method that you are creating is not fixed. You can have zero, one or more of the following parameters:

  • Map<String, Object> properties - The property map from the @Component properties; it can also contain your ConfigurationAdmin properties.
  • BundleContext bundleContext - The bundle context for the bundle that holds the component that is being activated. Saves you from having to look it up; great when you want to enable a ServiceTracker.
  • ComponentContext componentContext - The component context contains the above objects, but most of all it has context information about the component itself.


So the following @Activate methods would all individually be considered valid:

@Activate
protected void activate() {...}

@Activate
protected void afterPropertiesSet(Map<String, Object> props, BundleContext bCtx) { ... }

@Activate
public void dontCallMe(BundleContext bundleContext, Map<String, Object> properties, ComponentContext componentContext) { ... }

So we can use any method name (although Liferay tends to stick with activate()), any combination of parameters in any order we want.


@Deactivate

Hopefully it is obvious that this method is invoked when the component is about to be deactivated. What might not be so obvious is when, exactly, it is called.

Basically you can be sure that your component context is still valid (nothing has been done to it yet), but it is about to happen. You want to use this lifecycle event to clean up before your component goes away.

So if you have a ServiceTracker open, this is your chance to close it. If you have a file open or a DB connection or any resource, use this as your entry point to clean it all up.

Like the @Activate annotation, the method signature for the @Deactivate methods is variable. It supports all of the same parameters as @Activate, but it has an additional value, an int which will hold the deactivation reason.

I've never been worried about the deactivation reason myself, but I suppose there are good use cases for receiving them. If you want to see the codes and explanations, check out this Felix ticket:


@Modified

So this one is a fun one, one that is not really documented all that well, IMHO.

You can find blogs, etc where it boils down to "this method is called when the service context changes", but there is an assumption there that you understand what the service context is in the first place.

The tl;dr version is that this method is invoked mostly when your configuration changes, ala Configuration Admin.

For example, most of us will have some kind of code in our @Activate method to receive the properties for the component (from the @Component properties and also from the Config Admin), and we tend to copy the value we need into our component, basically caching it so we don't have to track it down later when we need it.

This is fine, of course, as long as no one goes to the control panel and changes your Config Admin properties.

When that happens, you won't have the updated value. Your only option in this case is to restart (the service, the component, or the container will all result in your getting the new value).

But that's kind of a pain, don't you think? Change a config, restart the container?

Enter @Modified. @Modified is how you get notified of the changes to the Config Admin properties, either via a change in the control panel or a change to the osgi/configs/<my pid>.config files.

When you have an @Modified annotation, you can update your local cache value and then you won't require a restart when the data changes.
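A sketch of that caching pattern, with plain methods standing in for the annotated ones; in a real DS component, activate() would carry @Activate and modified() would carry @Modified, and the cacheTimeout property name is made up:

```java
import java.util.Map;

public class CachingComponent {

    private volatile int cacheTimeout = 600; // default, in seconds

    // Would carry @Activate in a real component: read config, then do
    // one-time setup (open trackers, etc.).
    public void activate(Map<String, Object> properties) {
        readConfig(properties);
    }

    // Would carry @Modified: only refreshes the cached value, so no
    // restart is needed when the configuration changes.
    public void modified(Map<String, Object> properties) {
        readConfig(properties);
    }

    private void readConfig(Map<String, Object> properties) {
        Object value = properties.get("cacheTimeout");
        if (value != null) {
            cacheTimeout = Integer.parseInt(value.toString());
        }
    }

    public int getCacheTimeout() {
        return cacheTimeout;
    }
}
```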

Note that sometimes you'll see Liferay code like:

@Activate
@Modified
protected void activate(Map<String, Object> properties) { ... }

where the same method is used for both lifecycle events. This is fine, but you have to ensure that you're not doing anything in the method that you don't want invoked for the @Modified call.

For example, if you're starting a ServiceTracker, an @Modified notification will have you restarting the tracker unless you are careful in your implementation.

I often find it easier just to use separate methods so I don't have to worry about it.
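As a sketch of that separation, consider the following. The "endpoint" property, the configuration PID, and the class names are all hypothetical, just to show the pattern of one-time setup in @Activate versus a cheap cache refresh in @Modified:

```java
import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Modified;

@Component(configurationPid = "com.example.MyConfiguration")
public class CachingComponent {

	// cached copy of a Config Admin property
	private volatile String endpoint;

	@Activate
	protected void activate(Map<String, Object> properties) {
		// one-time setup (service trackers, resources, ...) would go here
		endpoint = readEndpoint(properties);
	}

	@Modified
	protected void modified(Map<String, Object> properties) {
		// only refresh the cached value; nothing gets restarted
		endpoint = readEndpoint(properties);
	}

	String getEndpoint() {
		return endpoint;
	}

	private String readEndpoint(Map<String, Object> properties) {
		Object value = properties.get("endpoint");

		return (value == null) ? "http://localhost" : value.toString();
	}
}
```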

The method signature for your @Modified methods follows the same rules as @Activate, so all of the same parameter types are allowed. You can still get at the bundle context or the component context if you need to, though often this isn't necessary.


So there you kind of have it. There is not a heck of a lot more to it.

With @Activate, you get the lifecycle notification that your component is starting. With @Deactivate, you can clean up before your component is stopped. And with @Modified, you can avoid the otherwise required and pesky restart.



David H Nebinger 2018-08-25T13:51:00Z
Categories: CMS, ECM

Liferay 7.1 CE GA1 OSGi Service / Portlet / Maven sample

Liferay - Fri, 08/24/2018 - 03:14

Hi everybody,

With version 7.x we have all started to play with OSGi, and for some projects it's sometimes hard to get a clean start with the correct build tool, despite the Blade samples, which are definitely useful.

In this blog I want to share a sample with an OSGi service and a portlet, both configured to be deployed on Liferay 7.1.

The code can be compiled with Maven:

A repository has to be added to your settings.xml so that version 3.0.0 of the Liferay kernel can be found:


Let's look at some details of the sample:

1 - OSGi service module

In this module we have an interface called OsgiService and its implementation, OsgiServiceImpl.

OsgiService lives in the xxx.api package, which is declared as an exported package in the bnd file.
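The post doesn't include the source, but the pattern is a plain interface plus a Declarative Services component. Here's a hedged sketch; the method name and the message format are my guesses, not the sample's actual code:

```java
import org.osgi.service.component.annotations.Component;

// In the real sample, this interface lives in the exported ".api" package.
public interface OsgiService {

	String log(String message);
}

// The implementation is registered as an OSGi service via Declarative Services.
@Component(immediate = true, service = OsgiService.class)
class OsgiServiceImpl implements OsgiService {

	@Override
	public String log(String message) {
		String line = "[OsgiService] " + message;

		System.out.println(line);

		return line;
	}
}
```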

The MANIFEST.MF generated by the build:

Manifest-Version: 1.0
Bnd-LastModified: 1535096462818
Bundle-ManifestVersion: 2
Bundle-Name: osgi-simple-module
Bundle-SymbolicName:
Bundle-Version:
Created-By: 1.8.0_111 (Oracle Corporation)
Export-Package: ;version="0.0.1"
Import-Package:
Private-Package:
Provide-Capability: osgi.service;objectClass:List<String>="  al.osgi.simple.api.OsgiService"
Require-Capability: ;filter:="(&("
Service-Component: OSGI-INF/  Impl.xml
Tool: Bnd-


When the OSGi service module is started in Liferay, you can see that the service is correctly exposed (Felix web console view):

Under Service ID 9245, you can see the type of the exposed service and, below it, the implementation.


2 - OSGi portlet

The portlet is an MVCPortlet with a JSP; we inject the OsgiService and call its log method in the portlet's default render.
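The blog doesn't include the portlet source either, but the injection part typically looks like the following. The class name and the component properties are assumptions on my part; only the @Reference injection of OsgiService is the point being illustrated:

```java
import java.io.IOException;

import javax.portlet.Portlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

import com.liferay.portal.kernel.portlet.bridges.mvc.MVCPortlet;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(
	immediate = true,
	property = {
		"javax.portlet.display-name=OSGi Simple Portlet",
		"javax.portlet.init-param.view-template=/view.jsp"
	},
	service = Portlet.class
)
public class OsgiSimplePortlet extends MVCPortlet {

	// the service from the first module, injected by Declarative Services
	@Reference
	private OsgiService osgiService;

	@Override
	public void render(
			RenderRequest renderRequest, RenderResponse renderResponse)
		throws IOException, PortletException {

		// call the injected service during the default render
		osgiService.log("default render invoked");

		super.render(renderRequest, renderResponse);
	}
}
```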

Let's look at the portlet's MANIFEST.MF:

Manifest-Version: 1.0
Bnd-LastModified: 1535097259074
Bundle-ManifestVersion: 2
Bundle-Name: osgi-simple-portlet
Bundle-SymbolicName:
Bundle-Version:
Created-By: 1.8.0_111 (Oracle Corporation)
Import-Package: com.liferay.portal.kernel.portlet.bridges.mvc;version="[1.5,2)",javax.portlet;version="[2.0,3)",  pi;version="[0.0,1)",com.liferay.portal.kernel.log;version="[7.0,8)",javax.servlet;version="[3.0,4)",javax.servlet.http;version="[3.0,4)"
Private-Package: ,  nt.web.model,content
Provide-Capability: osgi.service;objectClass:List<String>="javax.portlet.Portlet",liferay.resource.bundle;"  al.osgi-simple-portlet";"content.Language"
Require-Capability: osgi.extender;filter:="(&(osgi.extender=jsp.taglib)(uri=",osgi.extender;filter:="(&(osgi.extender=jsp.taglib)(uri=",osgi.extender;filter:="(&(osgi.extender=jsp.taglib)(uri=  ortlet))",osgi.extender;filter:="(&(osgi.extender=jsp.taglib)(uri=http://",osgi.extender;filter:="(&(osgi.extender=jsp.taglib)(uri=",osgi.extender;filter:="(&(osgi.extender=osgi.component)(version>=1.3.0)(!(version>=2.0.0)))",osgi.service;filter:="(  Service)";effective:=active,;filter:="(&(  on=1.8))"
Service-Component: OSGI-INF/  rtlet.xml
Tool: Bnd-


In the Apache Felix web console, we can see that OsgiService is a service used by the portlet and satisfied by the OSGi simple module (service #9245):


I hope this quick tutorial helps some of you with your next development on Liferay 7.x.

Feel free to add comments.


Best regards,


David Bougearel 2018-08-24T08:14:00Z
Categories: CMS, ECM