Devcon 2018

Liferay - Wed, 09/12/2018 - 03:25

Did you already book your ticket for Devcon 2018? Early Bird ends in a few hours (14 Sep 2018) and I hear that the Unconference is solidly booked (not yet sold out, but on a good path to be sold out very soon).

Whether or not you've been to a past Devcon, if you need more reasons to come: The agenda is now online, together with a lot of information and quotes from past attendees on the main website. You'll have a chance to meet quite a number of Liferay enthusiasts, consultants, developers and advocates from Liferay's community. Rumors (substantiated in the agenda) are that David Nebinger will share his experience on Upgrading to Liferay 7.1, and is able to do so in 30 minutes. And if you've ever read his blogs or forum posts, you know that he covers a lot of ground and loves to share his knowledge and experience. And I see numerous other topics, from the Developer Story to React, from OSGi to Commerce and AI.

Any member of Liferay's team that you see at Devcon is there for one reason: To talk to you. If you've ever had a question, wanted to lobby for putting something on the roadmap, or just want the reasoning behind a certain design decision: That's the place where you can find someone responsible, on any level of Liferay's architecture or product guidance. Just look at the names on the agenda, and expect a lot more Liferay staff to be on site in addition to those named. And, of course, a number of curious and helpful community members as well.

And if you still need yet another reason: Liferay Devcon consistently has - by far - the best coffee you can get at a conference. The bar is high, and we're aiming to surpass it again with the help of Thomas Schweiger.

(If you're more interested in business topics than this nerd stuff, we have events like LDSF in other places in Europe. If Europe is too far, consider NAS or another event close to you. But if this nerdy stuff is for you, you should really consider coming.)

(Did I mention that the Unconference will sell out? Don't come crying to me if you don't get a ticket because you were too late. You have been warned.)

 

(Images: article's title photo: CC-by-2.0 Jeremy Keith, AMS silhouette from Devcon site)

Olaf Kock 2018-09-12T08:25:00Z
Categories: CMS, ECM

Building charts for multiple products

Liferay - Tue, 09/11/2018 - 03:28

With Liferay releasing new products such as Analytics Cloud and Commerce, we decided to cover the need for charts by providing an open source library.

 

The technology

Clay, our main implementation of Lexicon, created Clay charts. These charts are built on top of Billboard.js, to which Julien Castelain and other Liferay developers have contributed extensively. Billboard.js is an adaptation layer on top of D3.js, probably the best-known and most widely used data visualization library these days.

 

The issue

Although Billboard.js is a very good framework, it did not cover all our needs in terms of interaction design and visual design. Therefore, we have been working on top of it, contributing some of that work upstream and keeping the rest inside Lexicon and Clay.

Improving accessibility

Improving the accessibility of the different charts was one of our first contributions to Billboard.js. We provided three properties that help to differentiate the data before having to rely on colors.

 

  • Nine different dashed stroke styles for line charts that help the eye follow the shape of each line.

  • Nine different shapes to use as dots inside the line charts and the legend that help to read the points in each line.

  • Nine different background patterns to be used on shaped charts like the doughnut chart or the bar chart; adding this property to a chart background helps to recognise the different shapes even if the colors are similar.

Here is a clear example of how the user would perceive, read and follow the different data in the line chart, without the direct use of colors.

Creating a color palette

Color is one of the first properties that users perceive, along with shapes and lines, making it our next priority. We needed a flexible color palette that allowed us to represent different types of data.

This set is composed of nine different, ordered colors that are meant to be used in shaped charts as backgrounds or in line charts as borders.

Each of these colors can be divided into nine different shades using a Sass generator. This is useful for generating a gradient chart to cover all the standard situations for charts.
Here’s an example using the color blue:

Ideally, to take full advantage of these colors, use the charts over a white background.

Warning: using these colors for text won't reach the minimum contrast ratio required by the W3C. Using a legend, tooltips and popovers to provide text information is the best course of action.

Introducing a base interaction layer

The idea behind the design of these interactions is to provide a consistent and simple base for all charts. This increases predictability and reduces the learning curve.

These interactions are based on the main events (click/hover/tap) applied to the common elements in our charts: axes, legend, isolated elements or grouped elements.
We also reinforce the visualization with highlights between related information and extended information, displayed through popovers.

As you can see in the example below, the Stacked bar and the Heatmap share the same mouse hover interaction to create a highlight on the selected data. This is done without any change to the main color of the focused element; instead, the opacity of the other elements is decreased.

In addition to this, each designer can extend these interactions depending on their product, as well as work on advanced interactions. So, if they need specific actions such as a different result on hover, data filters, or data settings, they can add them to the chart as a customization.

Conclusion

Working with D3.js allowed us to focus on our project details such as accessibility, colors and interaction, adding quality to the first version of Charts and meeting the deadline at the same time.

Thanks to the collaboration with Billboard.js we were able to help another open source project and as a result, share our work with the world.

You can find more information about Clay, Charts and other components directly inside the Lexicon Site.

Emiliano Cicero 2018-09-11T08:28:00Z
Categories: CMS, ECM

How Page Fragments help the new web experience in Liferay Portal 7.1

Liferay - Mon, 09/10/2018 - 03:50

This is the second post explaining the new Web Experience functionalities released in version 7.1 of Liferay Portal. As presented in the previous post, in order to empower business users, it is necessary to have Page Fragment collections available.

 

But, what are they and what is their use?

 

Page Fragments are “reusable page parts” created by web developers to be used by non-technical users. Page Fragments are designed and conceived to be assembled by marketers or business users to build Content Pages. To start creating Page Fragments, we will be required to add a Page Fragment collection (we will look at the tools available in a moment), but first...

 

How are Page Fragments developed?

 

Page Fragments are implemented by web developers using HTML, CSS and JavaScript. The markup of a Page Fragment is regular HTML, but it can contain some special tags. Effectively, two main types of lfr tags add functionality to Page Fragments:

 

  • "lfr-editable" is used to make a fragment section editable. This applies to plain "text", "image" or "rich-text", depending on which of the 3 "type" options is used. Rich text provides a WYSIWYG editor for editing before publication.

 

<lfr-editable id="unique-id" type="text">
  This is editable text!
</lfr-editable>

<lfr-editable id="unique-id" type="image">
  <img src="...">
</lfr-editable>

<lfr-editable id="unique-id" type="rich-text">
  <h1>This is editable rich text!</h1>
  <p>It may contain almost any HTML elements</p>
</lfr-editable>

 

  • "lfr-widget-<>" is a group of tags used to embed widgets within Page Fragments and, therefore, to add dynamic behaviour. The corresponding widget name should be added in <>. For instance, "nav" for the Navigation Menu widget, "web-content" for Web Content Display or "form" for the Forms widget.

 

<div class="nav-widget">
  <lfr-widget-nav></lfr-widget-nav>
</div>

 

If you would like to get more detail on how to build Page Fragments, you can check the Liferay 7.1 Documentation section on Developing Page Fragments.


 

What tools are there for the creation and management of Page Fragments?

 

Page Fragment collections are stored and organized from a new administration application. There we will find access to the Page Fragment editor, where front-end developers will be able to create new collections or edit existing ones. Importing/exporting collections is also an option.


 


 

An interesting additional functionality is the "View Usages" option found in the kebab menu of the Page Fragments in use. It allows you to track the usage of a given Page Fragment: in which Pages, Page Templates and Display Pages it has been used, and which version each of them is using. Page Fragments are designed to allow business users to do inline editing, which means web developers are not expected to make changes to the actual content, but some other edits are likely to be necessary - adjusting the size of an image, the position of text, and so on. To support these scenarios, changes are not propagated to Page Fragments in use by default, but the web developer is given a tool for bulk change propagation that will apply to selected usages within the list.

 

To finish, a reminder that, should you be interested in starting from an example, Themes with an existing set of Page Fragments can be downloaded for free from the Marketplace. Both Fjord and Westeros Bank have now been updated for 7.1.

 

Also, remember that you can check the “Building Engaging Websites” free course available on Liferay University.

Ianire Cobeaga 2018-09-10T08:50:00Z
Categories: CMS, ECM

Extending OSGi Components

Liferay - Fri, 09/07/2018 - 20:46

A few months ago, in the Community Chat, one of our community members raised the question, "Why does Liferay prefer public pages over private pages?" For example, if you select the "Go to Site" option, if there are both private pages and public pages, Liferay sends you to the public pages.

Unfortunately, I don't have an answer to that question. However, through some experimentation, I am able to help answer a closely related question: "Is it possible to get Liferay to prefer private pages over public pages?"

Find an Extension Point

Before you can actually do any customization, you need to find out what is responsible for the behavior that you are seeing. Once you find that, it'll then become apparent what extension points are available to bring about the change you want to see in the product.

So to start off, you need to determine what is responsible for generating the URL for the "Go to Site" option.

Usually, the first thing to do is to run a search against your favorite search engine to see if anyone has explained how it works, or at least tried a similar customization before. If you're extremely fortunate, you'll find a proof of concept, or at least a vague outline of what you need to do, which will cut down on the amount of time it takes for you to implement your customization. If you're very fortunate, you'll find someone talking about the overall design of the feature, which will give you some hints on where you should look.

Sadly, in most cases, your search will probably come up empty. That's what would have happened in this case.

If you have no idea what to do, the next thing you should try is to ask someone if they have any ideas on how to implement your customization. For example, you might post on the Liferay community forums or you might try asking a question on Stackoverflow. If you have a Liferay expert nearby, you can ask them to see if they have any ideas on where to look.

If there are no Liferay experts, or if you believe yourself to be one, the next step is to go ahead and find it yourself. To do so, you will need a copy of the Liferay Portal source (shallow clones are also adequate for this purpose, which is beneficial because a full clone of Liferay's repository is on the order of 10 GB now), and then search for what's responsible for the behavior that you are seeing within that source code.

Find the Language Key

Since what we're changing has a GUI element with text, it necessarily has a language key responsible for displaying that text. This means that if you search through all the Language.properties files in Liferay, you should be able to find something that reads, "Go to Site". If you're using a different language, you'll need to search through Language_xx_YY.properties files instead.

git ls-files | grep -F Language.properties | xargs grep -F "Go to Site"

In this case, searching for "Go to Site" will lead you to the com.liferay.product.navigation.site.administration module's language files, which tell us that the language key that reads "Go to Site" corresponds to the key go-to-site.

Find the Frontend Code

It's usually safe to assume that the language file where the key is declared is also the module that uses it, which means we can restrict our search to just the module where the Language.properties lives.

git ls-files modules/apps/web-experience/product-navigation/product-navigation-site-administration | \
	xargs grep -Fl go-to-site

This will give us exactly one result.

Replace the Original JSP

If you were to choose to modify this JSP, a natural approach would be to follow the tutorial on JSP Overrides Using OSGi Fragments, and then call it a day.

With that in mind, a simple way to get the behavior we want is to let Liferay generate the URL, and then do a straight string replacement changing /web to /group (or /user if it's a user personal site) if we know that the site has private pages.

<%@ page import="com.liferay.portal.kernel.util.StringUtil" %>
<%@ page import="com.liferay.portal.util.PropsValues" %>

<%
Group goToSiteGroup = siteAdministrationPanelCategoryDisplayContext.getGroup();
String goToSiteURL = siteAdministrationPanelCategoryDisplayContext.getGroupURL();

if (goToSiteGroup.getPrivateLayoutsPageCount() > 0) {
	goToSiteURL = StringUtil.replaceFirst(
		goToSiteURL,
		PropsValues.LAYOUT_FRIENDLY_URL_PUBLIC_SERVLET_MAPPING,
		goToSiteGroup.isUser() ?
			PropsValues.LAYOUT_FRIENDLY_URL_PRIVATE_USER_SERVLET_MAPPING :
			PropsValues.LAYOUT_FRIENDLY_URL_PRIVATE_GROUP_SERVLET_MAPPING);
}
%>

<aui:a cssClass="goto-link list-group-heading" href="<%= goToSiteURL %>" label="go-to-site" />

Copy More Original Code

Now, let's imagine that we want to also worry about the go-to-other-site site selector and update it to provide the URLs we want. Investigating the site selector would lead you to item selectors, which would lead you to the MySitesItemSelectorView and RecentSitesItemSelectorView, which would take you to view_sites.jsp.

We can see that there are three instances where it generates the URL by calling GroupURLProvider.getGroupURL directly: line 83, line 111, and line 207. We would simply follow the same pattern as before in each of these three instances, and we'd be finished with our customization.

If additional JSP changes would be needed, this process of adding JSP overrides and replacement bundles would continue.

Extend the Original Component

While we were lucky in this case and found that we could fix everything by modifying just two JSPs, we won't always be this lucky. Therefore, let's take the opportunity to see whether there's a different way to solve the problem.

Following the path down to all the different things that this JSP calls leads us to a few options for which extension point we can potentially override in order to get the behavior we want.

First, we'll want to ask: is the package the class lives in exported? If it is not, we'll need to either rebuild the manifest of the original module to provide the package in Export-Package, or we will need to add our classes directly to an updated version of the original module. The latter is far less complicated, but the module would live in osgi/marketplace/override, which is not monitored for changes by default (see the module.framework.auto.deploy.dirs portal property).

From there, you'll want to ask the question: what kind of Java class is it? In particular, you would ask dependency management questions. Is an instance of it managed via Spring? Is it retrieved via a static method? Is there a factory we can replace? Is it directly instantiated each time it's needed?

Once you know how it's instantiated, the next question is how you can change its value where it's used. If there's a dependency management framework involved, we make the framework aware of our new class. For a direct instantiation, then depending on the Liferay team that maintains the component, you might see these types of variables injected as request attributes (which you would handle by injecting your own class for the request attribute), or you might see this instantiated directly in the JSP.

Extend a Component in a Public Package

Let's start with SiteAdministrationPanelCategoryDisplayContext. Digging around in the JSP, you discover that, unfortunately, it's just a regular Java object whose constructor is called in site_administration_body.jsp. This is just a plain old Java object that we instantiate from the JSP (it's not a managed dependency), which makes it a bad choice for an extension point unless you want to replace the class definition.

What about GroupURLProvider? Well, it turns out that GroupURLProvider is an OSGi component, which means its lifecycle is managed by OSGi. This means that we need to make OSGi aware of our customization and then replace the existing component with our own, which will provide a different implementation of the getGroupURL method that prefers private URLs over public URLs.

From a "can I extend this in a different bundle, or will I need to replace the existing bundle" perspective, we're fortunate (the class is inside of a -api module, where everything is exported), and we can simply extend and override the class from a different bundle. The steps are otherwise identical, but it's nice knowing that you're modifying a known extension point.

Next, we declare our extension as an OSGi component.

@Component(
	immediate = true,
	service = GroupURLProvider.class
)
public class PreferPrivatePagesGroupURLProvider extends GroupURLProvider {
}

If you're the type of person who likes to sanity check after each small change by deploying your update, there's a wrinkle you will run into right here.

If you deploy this component and then blacklist the original component by following the instructions on Blacklisting OSGi Modules and Components, you'll run into a NullPointerException. This is because OSGi doesn't fulfill any of the references on the parent class, so when it calls methods on the original GroupURLProvider, none of the references that the code thought would be satisfied actually are satisfied, and it just calls methods on null objects.

You can address part of the problem by using bnd to analyze the parent class for protected methods and protected fields by adding -dsannotations-options: inherit.

-dsannotations-options: inherit

Of course, setting field values using methods is uncommon, and the things Liferay likes to attach @Reference to are generally private variables, and so almost everything will still be null even after this is added. In order to work around that limitation, you'll need to use Java reflection. In a bit of an ironic twist, the convenience utility for replacing the private fields via reflection will also be a private method.

@Reference(unbind = "unsetHttp")
protected void setHttp(Http http) throws Exception {
	_setSuperClassField("_http", http);
}

@Reference(unbind = "unsetPortal")
protected void setPortal(Portal portal) throws Exception {
	_setSuperClassField("_portal", portal);
}

protected void unsetHttp(Http http) throws Exception {
	_setSuperClassField("_http", null);
}

protected void unsetPortal(Portal portal) throws Exception {
	_setSuperClassField("_portal", null);
}

private void _setSuperClassField(String name, Object value) throws Exception {
	Field field = ReflectionUtil.getDeclaredField(
		GroupURLProvider.class, name);

	field.set(this, value);
}

Implement the New Business Logic

Now that we've extended the logic, what we'll want to do is steal all of the logic from the original method (GroupURLProvider.getGroupURL), and then flip the order on public pages and private pages, so that private pages are checked first.

@Override
protected String getGroupURL(
	Group group, PortletRequest portletRequest, boolean includeStagingGroup) {

	ThemeDisplay themeDisplay = (ThemeDisplay)portletRequest.getAttribute(
		WebKeys.THEME_DISPLAY);

	// Customization START

	// Usually Liferay passes false and then true. We'll change that to
	// instead pass true and then false, which will result in the Go to Site
	// preferring private pages over public pages whenever both are present.

	String groupDisplayURL = group.getDisplayURL(themeDisplay, true);

	if (Validator.isNotNull(groupDisplayURL)) {
		return _http.removeParameter(groupDisplayURL, "p_p_id");
	}

	groupDisplayURL = group.getDisplayURL(themeDisplay, false);

	if (Validator.isNotNull(groupDisplayURL)) {
		return _http.removeParameter(groupDisplayURL, "p_p_id");
	}

	// Customization END

	if (includeStagingGroup && group.hasStagingGroup()) {
		try {
			if (GroupPermissionUtil.contains(
					themeDisplay.getPermissionChecker(), group,
					ActionKeys.VIEW_STAGING)) {

				return getGroupURL(group.getStagingGroup(), portletRequest);
			}
		}
		catch (PortalException pe) {
			_log.error(
				"Unable to check permission on group " + group.getGroupId(),
				pe);
		}
	}

	return getGroupAdministrationURL(group, portletRequest);
}

Notice that in copying the original logic, we need a reference to _http. We can either replace that with HttpUtil, or we can store our own private copy of _http. So that the code looks as close to the original as possible, we'll store our own private copy of _http.

@Reference(unbind = "unsetHttp")
protected void setHttp(Http http) throws Exception {
	_http = http;

	_setSuperClassField("_http", http);
}

protected void unsetHttp(Http http) throws Exception {
	_http = null;

	_setSuperClassField("_http", null);
}

Manage a Component's Lifecycle

At this point, all we have to do is disable the old component, which we can do by following the instructions on Blacklisting OSGi Modules and Components.

However, what if we wanted to do that at the code level rather than at the configuration level? Maybe we want to simply deploy our bundle and have everything just work without requiring any manual setup.

Disable the Original Component on Activate

First, you need to know that the component exists. If you attempt to disable the component before it exists, that's not going to do anything for you. We know it exists once a @Reference is satisfied. However, because we're going to disable it immediately upon realizing it exists, we want to make the reference optional. This leads us to the following rough outline, where we call _deactivateExistingComponent once our reference is satisfied.

@Reference(
	cardinality = ReferenceCardinality.OPTIONAL,
	policy = ReferencePolicy.DYNAMIC,
	policyOption = ReferencePolicyOption.GREEDY,
	target = "(component.name=com.liferay.site.util.GroupURLProvider)",
	unbind = "unsetGroupURLProvider"
)
protected void setGroupURLProvider(GroupURLProvider groupURLProvider)
	throws Exception {

	_deactivateExistingComponent();
}

protected void unsetGroupURLProvider(GroupURLProvider groupURLProvider) {
}

Next, you need to be able to access a ServiceComponentRuntime, which provides a disableComponent method. We can get access to this with another @Reference. If we exported this package, we would probably want this to be set using a method for reasons we experienced earlier that required us to implement our _setSuperClassField method, but for now, we'll be content with leaving it as private.

@Reference
private ServiceComponentRuntime _serviceComponentRuntime;

Finally, in order to call ServiceComponentRuntime.disableComponent, you need to generate a ComponentDescriptionDTO, which coincidentally needs just a name and the bundle that holds the component. In order to get the Bundle, you need to have the BundleContext.

@Activate
public void activate(
		ComponentContext componentContext, BundleContext bundleContext,
		Map<String, Object> config)
	throws Exception {

	_bundleContext = bundleContext;

	_deactivateExistingComponent();
}

private void _deactivateExistingComponent() throws Exception {
	if (_bundleContext == null) {
		return;
	}

	String componentName = GroupURLProvider.class.getName();

	Collection<ServiceReference<GroupURLProvider>> serviceReferences =
		_bundleContext.getServiceReferences(
			GroupURLProvider.class,
			"(component.name=" + componentName + ")");

	for (ServiceReference serviceReference : serviceReferences) {
		Bundle bundle = serviceReference.getBundle();

		ComponentDescriptionDTO description =
			_serviceComponentRuntime.getComponentDescriptionDTO(
				bundle, componentName);

		_serviceComponentRuntime.disableComponent(description);
	}
}

Enable the Original Component on Deactivate

If we want to be a good OSGi citizen, we also want to make sure that the original component is still available whenever we stop or undeploy our module. This is really the same thing in reverse.

@Deactivate
public void deactivate() throws Exception {
	_activateExistingComponent();

	_bundleContext = null;
}

private void _activateExistingComponent() throws Exception {
	if (_bundleContext == null) {
		return;
	}

	String componentName = GroupURLProvider.class.getName();

	Collection<ServiceReference<GroupURLProvider>> serviceReferences =
		_bundleContext.getServiceReferences(
			GroupURLProvider.class,
			"(component.name=" + componentName + ")");

	for (ServiceReference serviceReference : serviceReferences) {
		Bundle bundle = serviceReference.getBundle();

		ComponentDescriptionDTO description =
			_serviceComponentRuntime.getComponentDescriptionDTO(
				bundle, componentName);

		_serviceComponentRuntime.enableComponent(description);
	}
}

Once we deploy our change, we find that a few other components in Liferay are using the GroupURLProvider provided by OSGi. Among these is the go-to-other-site site selector, which would have required another bundle replacement with the previous approach.

Minhchau Dang 2018-09-08T01:46:00Z
Categories: CMS, ECM

Gradle: compile vs compileOnly vs compileInclude

Liferay - Fri, 09/07/2018 - 12:55

By request, a blog to explain compile vs compileOnly vs compileInclude...

First it is important to understand that these are actually names for various configurations used during the build process, specifically when it comes to dependency management. In Maven, these are implemented as scopes.

Each time one of these three types is listed in your dependencies {} section, you are adding the identified dependency to that configuration.

All three configurations add a dependency to the compile phase of the build, when your Java code is being compiled into bytecode.

The real difference these configurations make is in the manifest of the jar when it is built.

compile

So compile is the one that is easiest to understand. You are declaring a dependency that your Java code needs in order to compile cleanly.

As a best practice, you only want to list compile dependencies on those libraries that you really need to compile your code. For example, you might need group: 'org.apache.poi', name: 'poi-ooxml', version: '4.0.0' for reading and writing Excel spreadsheets, but you wouldn't want to spin out to http://mvnrepository.com/artifact/org.apache.poi/poi-ooxml/4.0.0 and then declare a compile dependency on everything POI needs. Those are transitive dependencies, and Gradle will handle them for you.
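
In a build.gradle file, that declaration sits in the dependencies block; a minimal sketch:

dependencies {
	// needed to compile code that reads and writes Excel spreadsheets
	compile group: 'org.apache.poi', name: 'poi-ooxml', version: '4.0.0'
}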

When the compile occurs, this dependency will be included in the classpath for javac to compile your Java source files.

Additionally, when it comes time to build the jar, packages that you use in your java code from POI will be added as Import-Package manifest entries.
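
For example, if your code imports org.apache.poi.xssf.usermodel, the built jar's MANIFEST.MF would end up with an entry along these lines (the exact version range is computed by bnd):

Import-Package: org.apache.poi.xssf.usermodel;version="[4.0,5)"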

It is this addition which will result in the "Unresolved Reference" error about the missing package if it is not available from some other module in the OSGi container.

For those with a Maven background, the compile configuration is the same as Maven's compile scope.

compileOnly

The compileOnly configuration is used to itemize a dependency that you need to compile your code, same as compile above.

The difference is that packages your java code use from a compileOnly dependency will not be listed as Import-Package manifest entries.

The common example for using compileOnly typically revolves around the use of annotations. I like to use FindBugs on my code (don't laugh, it has saved my bacon a few times and I feel I deliver better code when I follow its suggestions). Sometimes, however, FindBugs reports a false positive: something it thinks is a bug, but I know is exactly how I need it to be.

So the normal solution here is to add the @SuppressFBWarnings annotation on the method; here's one I used recently:

@SuppressFBWarnings(
	value = "NP_NULL_PARAM_DEREF",
	justification = "Code allocates always before call."
)
public void emit(K key) {
	...
}

FindBugs was complaining that I didn't check key for null, but it is actually emitting within the processing of a Map entry, so the key can never be null. Rather than add the null check, I added the annotation.

To use the annotation, I of course need to include the dependency:

compileOnly 'com.google.code.findbugs:annotations:3.0.0'

I used compileOnly in this case because I only need the annotation for the compile itself; the compile will strip out the annotation info from the bytecode because it is not a runtime annotation, so I do not need this dependency after the compile is done.

And I definitely don't want it showing up in the Import-Package manifest entry.

In OSGi, we will also tend to use compileOnly for the org.osgi.core and osgi.cmpn dependencies, not because we don't need them at runtime, but because we know that within an OSGi container these packages will always be provided (so the Manifest does not need to enforce it) plus we might want to use our jar outside of an OSGi container.

For those with a Maven background, the compileOnly configuration is similar to Maven's provided scope.

compileInclude

compileInclude is the last configuration to cover. Like compile and compileOnly, this configuration will include the dependency in the compile classpath.

The compileInclude configuration was actually introduced by Liferay and is included in Liferay's Gradle plugins.

The compileInclude configuration replaces the manual steps from my OSGi Dependencies blog post, option #4, including the jars in the bundle.

In fact, everything my blog talks about with adding the Bundle-ClassPath directive and the -includeresource instruction to the bnd.bnd file, well, compileInclude does all of this for you. Where compileInclude shines, though, is that it will also include some of the transitive dependencies in the module as well.
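
For reference, here is a rough sketch of the bnd.bnd entries that compileInclude effectively writes for you (jar names illustrative):

Bundle-ClassPath: ., lib/poi-ooxml.jar
-includeresource: lib/poi-ooxml.jar=poi-ooxml-4.0.0.jar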

Note how I said that some of the transitive dependencies are included? I haven't quite figured out how it decides which transitive dependencies to include, but I do know it is not always 100% correct. I've had cases where it missed a particular transitive dependency. I do know it will not include optional dependencies and that may have been the cause in those cases. To fix it though, I would just add a compileInclude configuration line for the missing transitive dependency.

You can disable the transitive dependency inclusion by adding a flag at the end of the declaration. For example, if I only wanted poi-ooxml but for some reason didn't want its transitive dependencies, I could use the following:

compileInclude group: 'org.apache.poi', name: 'poi-ooxml', version: '4.0.0', transitive: false

It's then up to you to include or exclude the transitive dependencies, but at least you won't need to manually update the bnd.bnd file.

If you're getting the impression that compileInclude will mostly work but may make some bad choices (including some you don't want and excluding some that you need), you would be correct. It will never offer you the type of precise control you can have by using Bundle-ClassPath and -includeresource. It just happens to be a lot less work.

For those who use Maven, I'm sorry but you're kind of out of luck as there is no corresponding Maven scope for this one.

Conclusion

I hope this clarifies whatever confusion you might have with these three configurations.

If you need recommendations of what configuration to use when, I guess I would offer the following (with a combined sketch after the list):

  • For packages that you know will be available from the OSGi container, use the compile configuration. This includes your custom code from other modules, all com.liferay code, etc.
  • For packages that you do not want from the OSGi container or don't think will be provided by the OSGi container, use the compileInclude configuration. This is basically all of those third party libraries that you won't be pushing as modules to the container.
  • For all others, use the compileOnly configuration.
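
Putting those recommendations together, a hypothetical dependencies block might look like this (artifacts and versions are purely illustrative):

dependencies {
	// provided by the OSGi container at runtime
	compile group: 'com.liferay.portal', name: 'com.liferay.portal.kernel', version: '3.0.0'

	// third party library that won't be deployed as a module, so embed it
	compileInclude group: 'org.apache.poi', name: 'poi-ooxml', version: '4.0.0'

	// needed at compile time only
	compileOnly group: 'com.google.code.findbugs', name: 'annotations', version: '3.0.0'
}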

Enjoy!

David H Nebinger 2018-09-07T17:55:00Z
Categories: CMS, ECM

Can We Get a Little Help Over Here?

Liferay - Thu, 09/06/2018 - 13:03
Introduction

One of the benefits that you get from an enterprise-class JEE application server is a centralized administration console.

Rather than needing to manage nodes individually like you would with Tomcat, the JEE admin console can work on the whole cluster at one time.

But, with Liferay 7 CE and Liferay 7 DXP and the deployment of OSGi bundle jars, portlet/theme wars and Liferay lpkg files, the JEE admin console cannot be used to push your shiny new module or even a theme war file because it won't know to drop these files in the Liferay deploy folder.

Enter the Deployment Helper

So Liferay created this Maven and Gradle plugin called the Deployment Helper to give you a hand here.

Using the Deployment Helper, the last part of your build is the generation of a war file that contains the bundle jars and theme wars, but is a single war artifact.

This artifact can be deployed to all cluster nodes using the centralized admin console.

Adding the Deployment Helper to the Build

To add the Deployment Helper to your Gradle-based build: https://dev.liferay.com/en/develop/reference/-/knowledge_base/7-0/deployment-helper-gradle-plugin

To add the Deployment Helper to your Maven-based build: https://dev.liferay.com/en/develop/reference/-/knowledge_base/7-0/deployment-helper-plugin

While both pages offer the technical details, they are still awfully terse when it comes to usage.

Gradle Deployment Helper

Basically for Gradle you get a new task, the buildDeploymentHelper task. You can execute gradlew buildDeploymentHelper on the command line after including the plugin and you'll get a war file, but probably one that you'll want to configure.

The plugin is supposed to pull in all jar files for you, so that will cover all of your modules. You'll want to update the deploymentFiles to include your theme wars and any of the artifacts you might be pulling in from the legacy SDK.

In the example below, my Liferay Gradle Workspace has the following build.gradle file:

buildscript {
	dependencies {
		classpath group: "com.liferay", name: "com.liferay.gradle.plugins", version: "3.12.48"
		classpath group: "com.liferay", name: "com.liferay.gradle.plugins.deployment.helper", version: "1.0.3"
	}

	repositories {
		maven {
			url "https://repository-cdn.liferay.com/nexus/content/groups/public"
		}
	}
}

apply plugin: "com.liferay.deployment.helper"

buildDeploymentHelper {
	deploymentFiles = fileTree('modules') { include '**/build/libs/*.jar' } +
		fileTree('themes') { include '**/build/libs/*.war' }
}

This will include all wars from the theme folder and all module jars from the modules folder. Since I'm being specific on the paths for files to include, any wars and jars that might happen to be polluting the directories will be avoided.

Maven Deployment Helper

The Maven Deployment Helper has a similar task, but of course you're going to use the pom to configure and you have a different command line.

The Maven equivalent of the Gradle config would be something along the lines of:

<build>
	<plugins>
		...
		<plugin>
			<groupId>com.liferay</groupId>
			<artifactId>com.liferay.deployment.helper</artifactId>
			<version>1.0.4</version>
			<configuration>
				<deploymentFileNames>
					modules/my-mod-a/build/libs/my-mod-a-1.0.0.jar,
					modules/my-mod-b/build/libs/my-mod-b-1.0.0.jar,
					...,
					themes/my-theme/build/libs/my-theme-1.0.0.war
				</deploymentFileNames>
			</configuration>
		</plugin>
		...
	</plugins>
</build>

Unfortunately you can't do some cool wildcard magic here, you're going to have to list out the ones to include.

Deployment Helper War

So you've built a war now using the Deployment Helper, but what does it contain? Here's a sample from one of my projects:

Basically you get a single class, the com.liferay.deployment.helper.servlet.DeploymentHelperContextListener class.

You also get a web.xml for the war.

And finally, you get all of the files that you listed for the deployment helper task.

DeploymentHelperContextListener

You can find the source for DeploymentHelperContextListener here, but I'll give you a quick summary.

So we have two key methods, copy() and contextInitialized().

The copy() method does, well, the copying of data from the input stream (the artifact to be deployed) to the output stream (the target file in the Liferay deploy folder). Nothing fancy.
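
In other words, it's a plain buffered stream copy; a rough sketch of the idea (not the actual Liferay source):

protected void copy(InputStream inputStream, OutputStream outputStream)
	throws IOException {

	byte[] buffer = new byte[8192];
	int length;

	// read from the artifact and write to the target file until exhausted
	while ((length = inputStream.read(buffer)) != -1) {
		outputStream.write(buffer, 0, length);
	}
}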

The contextInitialized() method is the implementation from the ContextListener interface and will be invoked when the application container has constructed the war's ServletContext.

If you scan the method, you can see how the parameters that are options to the plugins will eventually get to us via context parameters.

It then loops through the list of deployment filenames, and for each one in the list it will create the target file in the deploy directory (the Liferay deploy folder), and it will use the copy() method to copy the data out to the filesystem.

Lastly it will invoke the DeployManagerUtil.undeploy() on the current servlet context (itself) to attempt to remove the deployment helper permanently. Note that per the listed restrictions, this may not actually undeploy the Deployment Helper war.

web.xml

The web.xml file is pretty boring:

<?xml version="1.0"?>

<web-app
	xmlns="http://java.sun.com/xml/ns/javaee"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	version="3.0"
	xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">
	<context-param>
		<description>A comma delimited list of files in the WAR that should be deployed to the
			deployment-path. The paths must be absolute paths within the WAR.</description>
		<param-name>deployment-files</param-name>
		<param-value>/WEB-INF/micro.maintainance.outdated.task-1.0.0.jar,
			/WEB-INF/fragment.com.liferay.portal.search.web-1.0.0.jar,
			...
		</param-value>
	</context-param>
	<context-param>
		<description>The absolute path to the Liferay deploy folder on the target system.</description>
		<param-name>deployment-path</param-name>
		<param-value></param-value>
	</context-param>
	<listener>
		<listener-class>com.liferay.deployment.helper.servlet.DeploymentHelperContextListener</listener-class>
	</listener>
</web-app>

Each of the files to be deployed is listed as a parameter for the context listener.

The rest of the war is just the files themselves.

An Important Limitation

So there is one important limitation you should be aware of...

ContextListeners are invoked every time the container is restarted or the war is (re)deployed.

If your deployment helper cannot undeploy itself, every time you restart the container, all of your artifacts in the Deployment Helper war are going to be processed again.

So, as part of your deployment process, you should verify and ensure that the Deployment Helper has been undeployed, whether it can remove itself or whether you must manually undeploy from the centralized console.

Conclusion

So now, using the Deployment Helper, you can create a war file that contains files to deploy to Liferay: the module jars, the portlet wars, the theme wars and yes, you can even deploy Liferay .lpkg files, .lar files and even your license xml file (for DXP).

You can create the Deployment Helper war file directly out of your build tools. If you are using CI, you can use the war as one of your tracked artifacts.

Your operations folks will be happy in that they can return to using the centralized admin console to deploy a war to all of the nodes; they won't need to copy everything to the target servers' deploy folders manually.

They may gripe a little about deploying the bundle followed by an undeploy once the war has started, but you just need to remind them of the pain of the older, manual deployment process that you're saving them from.

Enjoy!

David H Nebinger 2018-09-06T18:03:00Z
Categories: CMS, ECM

Today's Top Ten: Ten Reasons to Avoid Sessions

Liferay - Wed, 09/05/2018 - 19:31

From the home office outside of Charleston, South Carolina, here are the top ten reasons to avoid Portlet and HTTP session storage:

Number 10 - With sticky sessions, traffic originating from a web address will be sent to the same node, even if other nodes in the cluster have more capacity. You lose some of the advantages of a cluster because you cannot direct traffic to nodes with available capacity, instead your users are stuck on the node they first landed on and hopefully someone doesn't start a heavyweight task on that node...

Number 9 - With sticky sessions, if the node fails, users on that node lose whatever was in the session store. The cluster will flip them over to an available node, but it cannot magically restore the session data.

Number 8 - If you use session replication to protect against node outage, the inter-node network traffic increases to pass around session data that most of the time is not necessary. You need it in case of node failure, but when was your last node failure? How much network and CPU capacity are you sacrificing for this "just in case" standby?

Number 7 - Neither sticky sessions nor even session replication will help you in case of disaster recovery. If your primary datacenter goes off the net and your users are switched over to the DR site, you often have backups and files syncing to DR for recovery, but who syncs session data?

Number 6 - Session storage is a memory hit. Session data is typically kept in memory, so if you have 5k of session data per user but you get slashdotted and 100k users hit your system, that's a 500 MB memory hit you're going to take for session storage. If your numbers are bigger, well, you can do the math. If you have replication, all of that data is replicated and retained on all nodes.

Number 5 - With sticky sessions, to upgrade a node you pull it out of the cluster, but now you need to wait for either folks to log out (ending their session), or wait for the sessions to expire (typically 30 minutes, but if you have an active user on that node, you could wait for a long time), or you can just kill the session and impact the user experience. All for what should be a simple process of taking the node out of circulation, updating the node, then getting it back into circulation.

Number 4 - Having a node out of circulation for a long time because of sessions is a risk that your cluster will not be up to handling capacity, or will be handling the capacity with fewer resources. In a two node cluster, if you have one node out of circulation in preparation for an update and the second fails, you have no active servers available to serve traffic.

Number 3 - In a massive cluster, session replication is not practical. The nodes will spend more time trying to keep each other's sessions in sync than they will serving real traffic.

Number 2 - Session timeouts discard session data, whether clients want that or not. If I put 3 items in my shopping cart but step away to play with my kids, when I come back and log back in, those 3 items should still be there. For much of the data we would otherwise stuff into a session, the user's perspective is that this in-flight data should come back when they log back in.

And the number one reason to avoid sessions:

Number 1 - They are a hack!

Okay, so this was a little corny, I know. But it is also accurate and important.

All of these issues are things you will face deploying Liferay, your portlets, or really any web application. If you avoid session storage at all costs, you avoid all of these problems.

But I get it. As a developer, session storage is just so darn attractive and easy and alluring. Don't know where to put it but need it in the future? Session storage to the rescue!

But really, session storage is like drugs. If you start using them, you're going to get hooked. Before you know it you're going to be abusing sessions, and your applications are going to suffer as a result. They really are better off avoided altogether.

There's a reason that shopping carts don't store themselves in sessions; they were too problematic. Your data is probably a lot more important than what kind of toothpaste I have in my cart, so if persistence is good enough for them, it is good enough for your data too.

And honestly, there are just so many better alternatives!

Have a multi-page form where you want to accumulate results for a single submit? Hidden fields carrying the values from the previous pages' form elements will use client storage for this data.
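
As a minimal sketch (field names purely illustrative), page two of a wizard can carry page one's values forward in hidden inputs:

<form action="/wizard/step-3" method="post">
  <!-- values collected on the previous page, carried forward -->
  <input type="hidden" name="firstName" value="Alice" />
  <input type="hidden" name="lastName" value="Smith" />

  <!-- fields for the current page -->
  <input type="text" name="email" />
  <button type="submit">Next</button>
</form>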

Cookies are another way to push the storage to the browser, keep it off of the server and keep your server side code stateless.

Data storage in a NoSQL database like Mongo is very popular, can be shared across the cluster (no replication) and, since it is schemaless, can easily persist incomplete data. 

It's even possible to do this in a relational database too.

So don't be lured in by the Siren's song. Avoid the doom they offer, and avoid session storage at all costs!

David H Nebinger 2018-09-06T00:31:00Z
Categories: CMS, ECM

The Good, The Bad and The Ugly

Liferay - Wed, 09/05/2018 - 11:34
The Ugly

In one of the first Liferay projects I ever did, I had a need to have some Roles in the environment. They needed to be there so I knew they were available for a custom portlet to assign when necessary.

I was working for an organization that had a structured Software Configuration Management process that was well defined and well enforced.

So code deployments were a big deal. There were real hard-copy deployment documents that itemized everything the Operations folks needed to do. As the developer, I didn't have access to any of the non-development environments, so anything that needed to be changed in the environment had to be part of a deployment and it had to be listed in the deployment documents.

And it was miserable. Sometimes I would forget to include the Role changes. Sometimes the Operations folks would skip the step I had in the docs. Sometimes they would fat-finger the data input and I'd end up with a Roel instead of my expected Role.

The Bad

So I quickly researched and implemented a Startup Hook.

For those that don't know about them, a Startup Hook was a Liferay-supported way to run code either at container start (Global) or at application start (Application). The big difference is that a container start only happens once, but an Application start happens when the container starts but also every time you would redeploy your hook.

Instead of having to include documentation telling the Operations folks how to create the roles I needed, my application startup hook would take care of that.

There were just three issues that I would have to code around:

The code runs at every startup (either Global or Application), so you need to check before doing something whether you have already completed the action. Since I wouldn't want to keep trying to (and failing to) add a duplicate role, I had to check if it was there and only add it if it was not found.

There is no "memory" to work from, so each implementation must expect that it could be running in any state. I had a number of roles I was using, and I would oftentimes need to add a couple more. My startup action could not assume there were no roles, nor could it assume that the previous list of roles was done and only the newer ones needed to be added. No, I had to check each role individually, as it was the only way to ensure my code would work in any environment it was deployed to.

The last issue, well that was a bug in my code that I couldn't fix. My startup action would run and would create a missing role. That was good. But if an administrator changed the role name, the next time my startup action would run, it would recreate the role. Because it was missing, see? Not because I had never created it, but because an administrator took steps to remove or rename it.

That last one was nasty. I could have taken the approach of populating a Release record using ReleaseLocalService, but that would be one other check that my startup action would need to perform. Pretty soon my simple startup action would turn into a development effort on its own.

The Good

With Liferay 7 CE and Liferay DXP, though, we have something better - the Upgrade Process. [Well, actually it was available in 6.2, but I didn't cotton to it until LR7, my bad]

I've blogged about this before because, well, I'm actually a huge fan.

By using an upgrade process, my old problem of creating the roles is easily handled. But even better, the problems that came from my startup action have been eliminated!

An upgrade process will only run once; if it successfully completes, it will not execute again. So my code to create the roles, it doesn't need to see if it was already done because it won't run twice.

An upgrade process has a memory in that it only runs the steps necessary to get to the latest version. So I can create some roles in 1.0.0, create additional roles in 1.1.0, I could remove a role in 1.1.1, add some new roles in 1.2.0... Regardless of the environment my module is deployed to, only the necessary upgrade processes will run. So a new deployment to a clean Liferay will run all of the upgrades, in order, but a deployment to an environment that had 1.1.1 will only execute the 1.2.0 upgrade.
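
As a minimal sketch of what this looks like in code (class name and versions are illustrative, and the exact Registry API can vary slightly between 7.x releases), an UpgradeStepRegistrator ties upgrade steps to schema version transitions:

import com.liferay.portal.kernel.upgrade.UpgradeProcess;
import com.liferay.portal.upgrade.registry.UpgradeStepRegistrator;

import org.osgi.service.component.annotations.Component;

@Component(immediate = true, service = UpgradeStepRegistrator.class)
public class RoleUpgradeStepRegistrator implements UpgradeStepRegistrator {

	@Override
	public void register(Registry registry) {

		// 1.0.0 -> 1.1.0: create the additional roles

		registry.register(
			"1.0.0", "1.1.0",
			new UpgradeProcess() {

				@Override
				protected void doUpgrade() throws Exception {
					// create the missing roles here, e.g. via RoleLocalService
				}

			});
	}

}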

The bug from the startup action process? It's gone. Since my upgrade will only run once, if an admin changes a role name or removes it, my upgrade process will not run again and recreate it.

Conclusion

Besides paying homage to one of the best movies ever made, I hope I've pointed out the different ways that have been supported over time for preparing an environment to run your code.

The ugly way, the manual way, well you really want to discard this as it does carry a risk of failure due to user error.

The bad way, well, it was the only way for Liferay before 7, but it has obvious issues, and a new and better process exists to replace it.

The good way, the upgrade process, is a solid way to prepare an environment for running your module.

A good friend of mine, Todd, recently told me he wasn't planning on using an upgrade process because he was not doing anything with a Liferay upgrade.

I think this suggests that the "upgrade process" perhaps has a misleading name. Upgrade processes are not limited to Liferay upgrades.

Perhaps we should call them "Environment Upgrade Processes" as that would imply we are upgrading the environment, not just Liferay.

What do you think? Do you have a better name? If so, I'd love to hear it!

David H Nebinger 2018-09-05T16:34:00Z
Categories: CMS, ECM

Liferay IntelliJ Plugin 1.1.1 Released

Liferay - Tue, 09/04/2018 - 22:40

 

The Liferay IntelliJ 1.1.1 plugin has been made available today. Head over to this page to download it.

 

Release Highlights:

 

  • Watch task decorator improvements

  • Support for Liferay DXP Wildfly and CE Wildfly 7.0 and 7.1

  • Better integration for Liferay Workspace

  • Improved Editing Support

    • Code completion for resource bundle keys for Liferay Taglibs

    • Code completion and syntax highlighting for JavaScript in Liferay Taglibs

    • Better Java editor with OSGi annotations

 

Using Editors
Special Thanks

Thanks to Dominik Marks for the improvements.

Yanan Yuan 2018-09-05T03:40:00Z
Categories: CMS, ECM

Meet The Extenders

Liferay - Mon, 09/03/2018 - 11:40
Introduction

As I spend more time digging around in the bowels of the Liferay source code, I'm always learning new things.

Recently I was digging in the Liferay extenders and thought I would share some of what I found.

Extender Pattern

So what is this Extender pattern anyway? Maybe you've heard about it in relation to Liferay's WAB extender or Spring extender or Xyz extender; maybe you have a guess about what they are, but as long as they work, maybe that's good enough. If you're like me, though, you'd like to know more, so here goes...

The Extender Pattern is actually an official OSGi pattern.  The simplest and complete definition I found is:

The extender pattern is commonly used to provide additional functionality at run time based on bundle content. For this purpose, an extender bundle scans new bundles at a certain point in their life cycles and decides whether to take additional actions based on the scans. Additional actions might include creating extra resources, instantiating components, publishing services externally, and so forth. The majority of the functionality in the OSGi Service Platform Enterprise Specification is supported through extender bundles, most notably Blueprint, JPA, and WAB support. - Getting Started with the Feature Pack for OSGi Applications and JPA 2.0 by Sadtler, Haischt, Huber and Mahrwald.

But you can get a much larger introduction to the OSGi Extender Model if you want to learn about the pattern in more depth.

Long story short, an extender will inspect files in a bundle that is starting and can automate some functionality. So for DS, for example, it can find classes decorated with the @Component annotation and work with them. The extender takes care of the grunt work that we, as developers, would otherwise have to keep writing to register and start our component instances.
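
For a concrete feel of the pattern, here is a rough, minimal sketch (not Liferay code; the header name is made up): an extender is frequently just a BundleTracker that watches for bundles carrying a particular manifest header.

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.BundleEvent;
import org.osgi.util.tracker.BundleTracker;

public class MyHeaderExtender extends BundleTracker<Bundle> {

	public MyHeaderExtender(BundleContext context) {
		// watch bundles as they reach the ACTIVE state
		super(context, Bundle.ACTIVE, null);
	}

	@Override
	public Bundle addingBundle(Bundle bundle, BundleEvent event) {
		// only extend bundles that opt in via a manifest header
		if (bundle.getHeaders().get("My-Extension-Header") == null) {
			return null;
		}

		// ... scan the bundle, register services, create resources ...

		return bundle;
	}

}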

Liferay actually has a number of extenders, let's check them out...

com.liferay.portal.remote.http.tunnel.extender.internal.HttpTunnelExtender

This extender is responsible for wiring up the HttpTunnel servlet for the bundle. If the bundle has a header, Http-Tunnel, it will have a tunnel servlet wired up for it. It will actually create a number of supporting service registrations:

  • AuthVerifierFilter for authentication verification (verification via the AuthVerify pipeline, includes BasicAuth and Liferay session auth).
  • ServletContextHelper for the servlet context for the bundle.
  • Servlet for the tunnel servlet itself. For those that don't know, the tunnel servlet allows for sending commands to a remote Liferay instance via the protected tunnel servlet and is a core part of how remote staging works.
com.liferay.frontend.theme.contributor.extender.internal.ThemeContributorExtender

Yep, the theme contributors are implemented using the Extender pattern.

This extender looks for either the Liferay-Theme-Contributor-Type header in the bundle (via your bnd.bnd file) or, if there is a package.json file, the themeContributorType property. It also ensures that resources are included in the bundle.
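
As a sketch, a theme contributor module typically opts in through bnd.bnd headers along these lines (context path illustrative):

Liferay-Theme-Contributor-Type: theme
Web-ContextPath: /my-theme-contributor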

The ThemeContributorExtender will then register two services for the bundle, the first is a ThemeContributorPortalWebResources instance for exposing bundle resources as Portal Web Resources, the other is an instance of BundleWebResources to actually expose and return the bundle resources.

com.liferay.portal.configuration.extender.internal.ConfiguratorExtender

The configurator extender is kind of interesting in that there don't seem to be any actual implementations using it.

Basically, if there is a bundle header, Liferay-Configuration-Path, the configurator extender will use properties files in this path to set configuration in Configuration Admin. I'm guessing this may be useful if you wanted to force a configuration through a bundle deployment, i.e. if you wanted to push a bundle to partially change the ElasticSearch configuration.

com.liferay.portal.language.extender.internal.LanguageExtender

The language extender handles resource bundle processing for the bundle. If the bundle has the liferay.resource.bundle capability, it creates the extension that knows how to parse the capability string (the one that often includes base names, aggregate contexts, etc.) and registers a special aggregating Resource Bundle Loader into OSGi.

com.liferay.portal.spring.extender.internal.context.ModuleApplicationContextExtender

This is the magical extender that makes ServiceBuilder work in Liferay 7 CE and Liferay 7 DXP... This one is going to take some more space to document.

The extender works for all bundles with a Liferay-Spring-Context header (pulled in from bnd.bnd of course).

First, the ModuleApplicationContextExtender creates a ModuleApplicationContextRegistrator instance, which creates a ModuleApplicationContext instance for the bundle; this will be the Spring context the module's beans are created in/from. It uses the Liferay-Service and Liferay-Spring-Context bundle headers to identify the Spring context XML files, and the ModuleApplicationContext will be used by Spring to instantiate everything.

It next creates the BeanLocator for the bundle (remember those from the Liferay 6 days? It is how the portal can find services when requested in the templates and scripting control panel).

The ModuleApplicationContextRegistrator then initializes the services and finally registers each of the Spring beans in the bundle as OSGi components (that's why you can @Reference SB services without them actually being declared as @Components in the SB implementation classes).

The ModuleApplicationContextExtender isn't done yet, though. The Spring initialization is wrapped in a dynamically created component, managed by the OSGi Dependency Manager, so that OSGi takes over the management (lifecycle) of the new Spring context.

If the bundle has a Liferay-Require-SchemaVersion header, the ModuleApplicationContextExtender will add the requirement for the listed version as a component dependency. This is how the x.y.z version of the service implementation gets bound specifically to the x.y.z version of the API module. It is also why it is important to keep the Liferay-Require-SchemaVersion header in sync with the version you stamp on the API module, and to actually remember to bump your module version numbers when you change the service.xml file and/or the method signatures on your entity or service classes.
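
In bnd.bnd terms, that means keeping headers like these in sync; the version values are illustrative, and a typical ServiceBuilder service module carries the other two headers as well:

Bundle-Version: 1.0.2
Liferay-Require-SchemaVersion: 1.0.2
Liferay-Service: true
Liferay-Spring-Context: META-INF/spring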

The ModuleApplicationContextExtender has a final responsibility: the initial creation of the ServiceBuilder tables, indexes and sequences. Normally, when we are dealing with upgrades, we must manually create our UpgradeStepRegistrator components and have them register upgrade steps to be called when a version changes. But the one upgrade step we never have to write is ServiceBuilder's 0.0.0 to x.y.z upgrade step. The ModuleApplicationContextExtender automatically registers the upgrade step that applies the scripts to create the tables, indexes and sequences.

So yeah, there's a lot going on here. If you wondered how your service module gets exposed to Liferay, Spring and OSGi, well now you know.

com.liferay.portal.osgi.web.wab.extender.internal.WabFactory

I actually think this extender is poorly named (my own opinion). The name WabFactory implies (to me) that it may only be for WABs, but it actually covers generic Servlet stuff as well.

Liferay uses the OSGi HTTP Whiteboard service for exposing all servlets, filters, etc. Your REST service? HTTP Whiteboard. Your JSP portlet? Yep, HTTP Whiteboard. The Theme Contributor? Yep, same. Simple servlets? Yep. Your "legacy" portlet WAR that gets turned into a WAB by Liferay, it too is exposed via the HTTP Whiteboard.

This is a good thing, of course, because with the dynamism of OSGi Declarative Services, your various implementations can be added, removed and restarted without affecting the container.

But, what you may not know is that the HTTP Whiteboard is very agnostic towards implementation details. For example, it doesn't talk about JSP handling at all. Nor REST handling, MIME types, etc. It really only specifies how services can be registered so that the HTTP Whiteboard can delegate incoming requests to the correct service implementations.

To that end, there's a series of steps to perform for a vanilla HTTP Whiteboard registration. First you need the Http service reference so you can register. You need to create a ServletContextHelper for the bundle (to expose your bundle resources as the servlet context), and then you use it to register the servlets, filters and listeners that come from the bundle (or elsewhere). That's a lot of boilerplate code that deals with the interaction with OSGi; if you're a servlet developer, you shouldn't need to master all of those OSGi aspects.

And then there's still JSP to deal with, REST service, etc. A lot of stuff that you'd need to do.

But we don't, because of the WabFactory.

It is the WabFactory, for example, that processes the Web-ContextPath and Web-ContextName bundle headers so we don't have to decorate every servlet with the HTTP Whiteboard properties. It registers the ServletContextHelper instance for the bundle and also binds all of the servlets, filters and listeners to the servlet context. It registers a JspServlet so the JSP files in the bundle can be compiled and used (you did realize that Liferay is compiling bundle JSPs, and not your application container, right?).
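
So in practice, a module or WAB only needs the two headers from its bnd.bnd (or the auto-generated equivalent) and the WabFactory handles the rest; the values here are just examples:

Web-ContextPath: /my-app
Web-ContextName: My App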

There's actually a lot of other functionality baked in here, in the portal-osgi-web-wab-extender module, I would encourage you to dig through it if you want to understand how all of this magic happens.

com.liferay.frontend.js.top.head.extender.internal.TopHeadExtender (7.1)

This is a new extender introduced in 7.1.

This extender actually uses two different bundle headers, Liferay-JS-Resources-Top-Head and Liferay-JS-Resources-Top-Head-Authenticated to handle the inclusion of JS resources in the top of the page, possibly using different resources for guest vs authenticated sessions.

This new extender basically deprecates the old javascript.barebone.files and javascript.everything.files properties from portal-ext.properties and allows modules to supply their own scripts dynamically for inclusion in the HTML page's <head /> area.

You can actually see this being used in the new frontend-js-web module's bnd.bnd file. You can override it using a custom module and a Liferay-Top-Head-Weight bundle header to define a higher service ranking.
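
A custom module doing that override might carry headers along these lines in its bnd.bnd; the file lists and the weight value are hypothetical:

Liferay-JS-Resources-Top-Head:\
    /js/first.js,\
    /js/second.js
Liferay-JS-Resources-Top-Head-Authenticated:\
    /js/authenticated-only.js
Liferay-Top-Head-Weight: 100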

Conclusion

So we started out with a basic definition of the OSGi Extender pattern. We then made a hard turn towards the concrete implementations currently used in Liferay.

I hope you can see how these extenders are actually an important part of Liferay 7 CE and Liferay 7 DXP, especially from a development perspective.

If the extenders weren't there, we developers would need to write a lot of boilerplate code. If you tear into the ModuleApplicationContextExtender and everything it is doing for our ServiceBuilder implementations, stop and ask yourself how productive your day would be if you had to wire all of that up yourself. How many bugs would we otherwise have had?

Perhaps this review of Liferay Extenders may give you some idea of how you can eliminate boilerplate code in your own projects...

David H Nebinger 2018-09-03T16:40:00Z
Categories: CMS, ECM

Simplifying the Compatibility Matrix

Liferay - Tue, 08/28/2018 - 15:10

One of the initiatives our Quality Assurance team has taken over the past few years has been to automate all of their testing. This has resulted in all of our compatible environments being tested in the same fashion. Thus, we no longer need to distinguish between Supported and Certified environments.

 

Starting with Liferay DXP 7.0, we will now list all of our compatible environments together. We hope this will make the decision to use Liferay DXP even easier.

 

Here is the new Compatibility Matrix for 7.0 and the Compatibility Matrix for 7.1.

David Truong 2018-08-28T20:10:00Z
Categories: CMS, ECM

New Liferay Project SDK Installers 3.3.0 GA1 Released

Liferay - Mon, 08/27/2018 - 21:16

We are pleased to announce the new release of Liferay Project SDK Installers version 3.3.0 GA1.

 

Download:

 

Customers can download all of them from the customer studio download page.

 

Upgrading from a previous 3.2 installation:

 

  1. Download the updatesite here
  2. Go to Help > Install New Software… > Add…
  3. Select Archive... and browse to the downloaded updatesite
  4. Click OK to close the Add Repository dialog
  5. Select all features to upgrade, then click Next, click Next again, and accept the license agreements
  6. Finish and restart to complete the upgrade

Release highlights:

 

Installers Improvements:

 

Added an option to install Developer Studio only

 

Developer Studio Improvements and Fixes:

 

1. Code Upgrade Tool Improvements

  • support for upgrading to Liferay 7.1
    • convert Liferay Plugins SDK 6.2 projects to Liferay Workspace 7.0 or 7.1
    • convert Liferay Workspace 7.0 to Liferay Workspace 7.1
  • added Liferay DXP/7.1 breaking changes
  • various performance improvements

2. Enabled dependency management for Target Platform

3. Fixed source lookup during the watch task

 

Using Code Upgrade Tool

 

 

 

 

Feedback

If you run into any issues or have any suggestions, please come find us on our community forums or report them on JIRA (IDE project); we are always around to try to help you out. Good luck!

Yanan Yuan 2018-08-28T02:16:00Z
Categories: CMS, ECM

Revisiting OSGi DS Annotations

Liferay - Sat, 08/25/2018 - 08:51

I've been asked a couple of times recently about different aspects of the @Modified annotation that I'm not sure have been made clear in the documentation, so I wanted to cover the lifecycle annotations in a little more detail so we can use them effectively.

The @Activate, @Deactivate and @Modified annotations are used for lifecycle event notifications for the DS components. They get called when the component itself is activated, deactivated or modified and allow you to take appropriate action as a result.

One important note - these lifecycle events will only be triggered if your @Component is actually alive. I know, sounds kind of weird, but it can happen. If you have a mandatory @Reference that is not available, your @Component will not be alive and your @Activate (and other) methods will not be invoked.

@Activate

This annotation is used to notify your component that it is now loaded, resolved and ready to provide service. You use this method to do some final setup in your component. It is equivalent to the afterPropertiesSet() method of Spring's InitializingBean interface.

One of the cool things about the @Activate method is that the signature for the method that you are creating is not fixed. You can have zero, one or more of the following parameters:

  • Map<String, Object> properties - The property map from the @Component properties; it can also contain your Configuration Admin properties.
  • BundleContext bundleContext - The bundle context for the bundle that holds the component being activated. Saves you from having to look it up; great when you want to enable a ServiceTracker.
  • ComponentContext - The component context contains the above objects, but most of all it has context information about the component itself.

 

So the following @Activate methods would all individually be considered valid:

@Activate
protected void activate() {...}

@Activate
protected void afterPropertiesSet(Map<String, Object> props, BundleContext bCtx) { ... }

@Activate
public void dontCallMe(BundleContext bundleContext, Map<String, Object> properties, ComponentContext componentContext) { ... }

So we can use any method name (although Liferay tends to stick with activate()), any combination of parameters in any order we want.

@Deactivate

Hopefully it is obvious that this is invoked when the component is about to be deactivated. What might not be so obvious is when, exactly, it is called.

Basically you can be sure that your component context is still valid (nothing has been done to it yet), but deactivation is about to happen. You want to use this lifecycle event to clean up before your component goes away.

So if you have a ServiceTracker open, this is your chance to close it. If you have a file open or a DB connection or any resource, use this as your entry point to clean it all up.

Like the @Activate annotation, the method signature for the @Deactivate methods is variable. It supports all of the same parameters as @Activate, but it has an additional value, an int which will hold the deactivation reason.

I've never been worried about the deactivation reason myself, but I suppose there are good use cases for receiving them. If you want to see the codes and explanations, check out this Felix ticket: https://issues.apache.org/jira/browse/FELIX-925

@Modified

So this one is a fun one, one that is not really documented that well, IMHO.

You can find blogs, etc where it boils down to "this method is called when the service context changes", but there is an assumption there that you understand what the service context is in the first place.

The tl;dr version is that this method is invoked mostly when your configuration changes, ala Configuration Admin.

For example, most of us will have some kind of code in our @Activate method to receive the properties for the component (from the @Component properties and also from the Config Admin), and we tend to copy the value we need into our component, basically caching it so we don't have to track it down later when we need it.

This is fine, of course, as long as no one goes to the control panel and changes your Config Admin properties.

When that happens, you won't have the updated value. Your only option in this case is to restart (the service, the component, or the container will all result in your getting the new value).

But that's kind of a pain, don't you think? Change a config, restart the container?

Enter @Modified. @Modified is how you get notified of the changes to the Config Admin properties, either via a change in the control panel or a change to the osgi/configs/<my pid>.config files.

When you have an @Modified annotation, you can update your local cache value and then you won't require a restart when the data changes.

Note that sometimes you'll see Liferay code like:

@Activate
@Modified
protected void activate(Map<String, Object> properties) { ... }

where the same method is used for both lifecycle events. This is fine, but you have to ensure that you're not doing anything in the method that you don't want invoked for the @Modified call.

For example, if you're starting a ServiceTracker, an @Modified notification will have you restarting the tracker unless you are careful in your implementation.

I often find it easier just to use separate methods so I don't have to worry about it.
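
For example, here's a minimal sketch of that separate-methods approach, caching a single hypothetical config value (the endpoint.url property name is made up):

import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Modified;

import com.liferay.portal.kernel.util.GetterUtil;

@Component(service = MyService.class)
public class MyService {

    // locally cached copy of a config value
    private volatile String _endpointUrl;

    @Activate
    protected void activate(Map<String, Object> properties) {
        // one-time setup plus the initial read of the properties
        _endpointUrl = GetterUtil.getString(properties.get("endpoint.url"), "http://localhost");
    }

    @Modified
    protected void modified(Map<String, Object> properties) {
        // only refresh the cached value; no trackers get restarted here
        _endpointUrl = GetterUtil.getString(properties.get("endpoint.url"), "http://localhost");
    }
}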

The method signature for your @Modified methods follows the same rules as @Activate, so all of the same parameter types are allowed. You can still get to the bundle context or the component context if you need to, but often times this may not be necessary.

Conclusion

So there you kind of have it. There is not a heck of a lot more to it.

With @Activate, you get the lifecycle notification that your component is starting. With @Deactivate, you can clean up before your component is stopped. And with @Modified, you can avoid the otherwise required and pesky restart.

 

 

David H Nebinger 2018-08-25T13:51:00Z
Categories: CMS, ECM

Liferay 7.1 CE GA1 OSGi Service / Portlet / Maven sample

Liferay - Fri, 08/24/2018 - 03:14

Hi everybody,

With version 7.x we have all started to play with OSGi, and for some projects it's sometimes hard to get a clean start with the correct build tool, despite the Blade samples, which are definitely useful.

I wanted to share through this blog a sample with an OSGi service and a portlet which are configured to be deployed on Liferay 7.1.

The code given can be compiled with Maven: https://www.dropbox.com/sh/2ohrojsmtuzyev0/AAAqsroA_25aUYx3LEJbHsYsa?dl=0

A repository has to be added to your settings.xml (https://repository.liferay.com/nexus/content/repositories/liferay-public-releases/) to be able to find version 3.0.0 of the Liferay kernel.
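
For reference, here is a sketch of the settings.xml fragment that does this (the profile and repository ids are arbitrary):

<profiles>
    <profile>
        <id>liferay</id>
        <repositories>
            <repository>
                <id>liferay-public-releases</id>
                <url>https://repository.liferay.com/nexus/content/repositories/liferay-public-releases/</url>
            </repository>
        </repositories>
    </profile>
</profiles>
<activeProfiles>
    <activeProfile>liferay</activeProfile>
</activeProfiles>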

 

Let's see some details of the sample given:

1 - OSGi service module

In this module we have an interface called OsgiService and its implementation, OsgiServiceImpl.

OsgiService lives in the xxx.api package, which is declared as an exported package in the bnd.bnd file.
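
As a minimal sketch of what that pair can look like (the log method matches how the portlet below uses the service; the method body and everything else are assumptions):

// in the exported xxx.api package
public interface OsgiService {

    void log(String message);
}

// in the private xxx.impl package
import org.osgi.service.component.annotations.Component;

@Component(immediate = true, service = OsgiService.class)
public class OsgiServiceImpl implements OsgiService {

    @Override
    public void log(String message) {
        System.out.println(message);
    }
}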

Here is the MANIFEST.MF generated once the compilation is done:

Manifest-Version: 1.0
Bnd-LastModified: 1535096462818
Bundle-ManifestVersion: 2
Bundle-Name: osgi-simple-module
Bundle-SymbolicName: com.fr.nixial.osgi-simple-module
Bundle-Version: 0.0.1.201808240741
Created-By: 1.8.0_111 (Oracle Corporation)
Export-Package: com.fr.nixial.osgi.simple.api;version="0.0.1"
Import-Package: com.fr.nixial.osgi.simple.api
Private-Package: com.fr.nixial.osgi.simple.impl
Provide-Capability: osgi.service;objectClass:List<String>="com.fr.nixial.osgi.simple.api.OsgiService"
Require-Capability: osgi.ee;filter:="(&(osgi.ee=JavaSE)(version=1.8))"
Service-Component: OSGI-INF/com.fr.nixial.osgi.simple.impl.OsgiServiceImpl.xml
Tool: Bnd-3.2.0.201605172007

 

When the OSGi service module is started in Liferay, you will be able to see that the service is correctly exposed (Felix web console view):

With the Service ID 9245, you can see the type of the service exposed and the implementation below.

 

2 - OSGi portlet

The portlet is an MVCPortlet with a JSP, where we inject the OsgiService and call its log method in the portlet's default render phase.
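
The wiring inside the portlet class would look roughly like this sketch (the @Component portlet properties are omitted for brevity, and the log method is the one from the service above):

import java.io.IOException;

import javax.portlet.Portlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

import com.fr.nixial.osgi.simple.api.OsgiService;
import com.liferay.portal.kernel.portlet.bridges.mvc.MVCPortlet;

@Component(immediate = true, service = Portlet.class)
public class OsgiSimplePortlet extends MVCPortlet {

    @Override
    public void doView(RenderRequest renderRequest, RenderResponse renderResponse)
        throws IOException, PortletException {

        // call the injected OSGi service on the default render
        _osgiService.log("OsgiSimplePortlet render");

        super.doView(renderRequest, renderResponse);
    }

    @Reference
    private OsgiService _osgiService;
}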

Let's see the MANIFEST.MF of the portlet:

Manifest-Version: 1.0
Bnd-LastModified: 1535097259074
Bundle-ManifestVersion: 2
Bundle-Name: osgi-simple-portlet
Bundle-SymbolicName: com.fr.nixial.osgi-simple-portlet
Bundle-Version: 0.0.1.201808240754
Created-By: 1.8.0_111 (Oracle Corporation)
Import-Package: com.liferay.portal.kernel.portlet.bridges.mvc;version="[1.5,2)",javax.portlet;version="[2.0,3)",com.fr.nixial.osgi.simple.api;version="[0.0,1)",com.liferay.portal.kernel.log;version="[7.0,8)",javax.servlet;version="[3.0,4)",javax.servlet.http;version="[3.0,4)"
Private-Package: com.fr.nixial.environment.web,com.fr.nixial.environment.web.model,content
Provide-Capability: osgi.service;objectClass:List<String>="javax.portlet.Portlet",liferay.resource.bundle;bundle.symbolic.name="com.fr.nixial.osgi-simple-portlet";resource.bundle.base.name="content.Language"
Require-Capability: osgi.extender;filter:="(&(osgi.extender=jsp.taglib)(uri=http://java.sun.com/portlet_2_0))",osgi.extender;filter:="(&(osgi.extender=jsp.taglib)(uri=http://liferay.com/tld/aui))",osgi.extender;filter:="(&(osgi.extender=jsp.taglib)(uri=http://liferay.com/tld/portlet))",osgi.extender;filter:="(&(osgi.extender=jsp.taglib)(uri=http://liferay.com/tld/theme))",osgi.extender;filter:="(&(osgi.extender=jsp.taglib)(uri=http://liferay.com/tld/ui))",osgi.extender;filter:="(&(osgi.extender=osgi.component)(version>=1.3.0)(!(version>=2.0.0)))",osgi.service;filter:="(objectClass=com.fr.nixial.osgi.simple.api.OsgiService)";effective:=active,osgi.ee;filter:="(&(osgi.ee=JavaSE)(version=1.8))"
Service-Component: OSGI-INF/com.fr.nixial.environment.web.OsgiSimplePortlet.xml
Tool: Bnd-3.2.0.201605172007

 

In the Apache Felix web console, we can see that OsgiService is a service used by the portlet and satisfied by the OSGi simple module (service #9245):

 

I hope this quick tutorial will help some of you with your next development on Liferay 7.x.

Feel free to add comments.

 

Best regards,

David.

David Bougearel 2018-08-24T08:14:00Z
Categories: CMS, ECM

Delivering Localised Documents & Assets

Liferay - Mon, 08/20/2018 - 09:33

A customer recently asked me how Liferay can help them deliver documents to their users based on user-preferred language. One of the easiest ways to do this is through Liferay's out-of-the-box web content localisation feature. In this blog entry, I will show how this can be easily implemented.

 

1. The first step is to create a Web Content structure with fields for our web content and documents. Make sure the “Documents and Media” field is set to be repeatable so that we can add multiple documents.

 

 

2. Create a template with this code. Note that the field names from my structure are HtmlField and DocumentsAndMediaArray, and that I am using Lexicon Experience Language to display the documents as cards. A rough sketch of such a template follows.
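
Since the template code itself appeared as a screenshot, here is a hedged FreeMarker sketch of its shape, using the two structure fields named above; the card markup is a simplified stand-in for the actual Lexicon classes, and the way a download URL is derived from the document field data varies between Liferay versions:

${HtmlField.getData()}

<div class="row">
    <#if DocumentsAndMediaArray.getSiblings()?has_content>
        <#list DocumentsAndMediaArray.getSiblings() as document>
            <div class="card">
                <div class="card-body">
                    <a href="${document.getData()}">Download</a>
                </div>
            </div>
        </#list>
    </#if>
</div>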

 

3. Now that we’ve created our structure and template, it’s time to start adding the web content and documents.

 

4. In this example I am going to add a Spanish version of my web content, as well as Spanish versions of my documents. Click on the language selector button to start this process.

 

5. Once the Web Content article has been translated and published, all that remains now is to drop it on a page. It will appear as follows.

 

6. When it comes to delivering content based on a user’s preferred language, there are a number of options, such as picking up the preferred language in the user’s Liferay profile or web browser. Here, I am allowing the user to manually select their preferred language from a Liferay Language Selector widget placed on the page.

 

7. Clicking on the Spanish flag gives us the following.

 

And that’s all there is to it! This example is all you need to allow non-technical users to add localised documentation to a website. It should also serve as a good starting point to address more elaborate business requirements.

 

 

John Feeney 2018-08-20T14:33:00Z
Categories: CMS, ECM

Finding Bundle Dependencies

Liferay - Sun, 08/19/2018 - 15:23
Introduction

So many times I have answered the question, "How do I find out what import packages my bundle needs?" with the tired and unsatisfactory response that uses the following process:

  1. Build your module.
  2. Deploy your module.
  3. Use Gogo to see why your bundle doesn't start, often from an Unresolved Requirement on an Import Package.
  4. Either include the lib in your jar or use the Import-Package bnd.bnd property to exclude the package.
  5. Go to step 1; repeat until no further Unresolved Requirements are found.

Yeah, so this is really a pain, but it was the only way I knew of how to see what the imports are that your module needs.

Until Today.

Introducing Bnd

The Bnd tool (available at https://bnd.bndtools.org/) is the heart of building an OSGi bundle. Whether you're using Blade, Gradle, Maven, etc., it doesn't matter; under the covers you are likely invoking Bnd to build the module.

Most of us don't know about Bnd, mostly because we're lazy developers. Well okay, maybe not lazy, but we're only going to learn new tools if we have to. And if we can do our jobs without knowing every detail of the process, we're generally fine with it.

It is Bnd which is responsible for applying the directives in your bnd.bnd file and generating the bundle full of the OSGi details necessary for your bundle to deploy and run.

As it turns out, the Bnd tool knows a heck of a lot more about our bundles than we do.

To find this out, though, we need the Bnd tool installed.

Follow the instructions in section 3.1 of https://bnd.bndtools.org/chapters/120-install.html to install the command line client.

Bnd Printing

The command line tool actually gives you a full basket of tools to play with, you can find the whole list here: https://bnd.bndtools.org/chapters/860-commands.html

Honestly I have not really played with many of them yet.  I surely need to because there are definitely useful nuggets of gold in them there hills.

For example, the one nugget I've found so far is the print command.

I'm talking specifically about using bnd print --impexp to list imports and exports, as described in section 20.3 of https://bnd.bndtools.org/chapters/390-wrapping.html.

Turns out this command will list the imports and exports Bnd has identified for your module.

I ran this on one of my own modules to see what I would get:

bnd print --impexp build/libs/com.example.hero.rules.engine.simple-1.1.0.jar
[IMPEXP]
Import-Package
  com.example.data.model          {version=[1.0,2)}
  com.example.hero.rules.engine   {version=[1.1,2)}
  com.example.hero.rules.model    {version=[1.0,2)}
  com.example.hero.rules.service  {version=[1.0,2)}
  com.liferay.portal.kernel.log   {version=[7.0,8)}
  com.liferay.portal.kernel.util  {version=[7.3,8)}
  com.liferay.portal.kernel.uuid  {version=[6.2,7)}
  javax.xml.datatype
  javax.xml.namespace
  javax.xml.parsers
  org.w3c.dom
  org.w3c.dom.bootstrap
  org.w3c.dom.ls
  org.xml.sax

Cool, huh? I can see that my module wants to import stuff that I have defined off in the API module, but I can also see that I'm leveraging portal-kernel as well as XML processing.

POI Portlet

One of the frequent requests is what is necessary to use POI in a portlet. Let's find out together, shall we?

So I created a simple module using Blade and use the following build.gradle file:

dependencies {
    compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "2.0.0"
    compileOnly group: "com.liferay.portal", name: "com.liferay.util.taglib", version: "2.0.0"
    compileOnly group: "javax.portlet", name: "portlet-api", version: "2.0"
    compileOnly group: "javax.servlet", name: "javax.servlet-api", version: "3.0.1"
    compileOnly group: "jstl", name: "jstl", version: "1.2"
    compileOnly group: "org.osgi", name: "osgi.cmpn", version: "6.0.0"
    compileInclude group: 'org.apache.poi', name: 'poi-ooxml', version: '3.17'
}

The only thing I added here was the compileInclude directive. As we all know, this will automagically include the declared dependency and some of its transitive dependencies in the bundle. But as many of us have seen, if you deploy this guy you would still get Unresolved Requirement messages.

Well, using Bnd we can now see why that is:

bnd print --impexp build/libs/com.example.poi-1.0.0.jar
[IMPEXP]
Import-Package
  com.liferay.portal.kernel.portlet.bridges.mvc {version=[1.0,2)}
  com.microsoft.schemas.office.powerpoint
  com.microsoft.schemas.office.word
  com.sun.javadoc
  com.sun.tools.javadoc
  javax.crypto
  javax.crypto.spec
  javax.imageio
  javax.imageio.metadata
  javax.imageio.stream
  javax.portlet {version=[2.0,3)}
  javax.security.auth.x500
  javax.servlet {version=[3.0,4)}
  javax.servlet.http {version=[3.0,4)}
  javax.swing
  javax.xml.bind
  javax.xml.bind.annotation
  javax.xml.bind.annotation.adapters
  javax.xml.crypto
  javax.xml.crypto.dom
  javax.xml.crypto.dsig
  javax.xml.crypto.dsig.dom
  javax.xml.crypto.dsig.keyinfo
  javax.xml.crypto.dsig.spec
  javax.xml.parsers
  javax.xml.transform
  javax.xml.transform.dom
  javax.xml.transform.stream
  javax.xml.validation
  javax.xml.xpath
  junit.framework
  org.apache.commons.logging
  org.apache.crimson.jaxp
  org.apache.jcp.xml.dsig.internal.dom
  org.apache.poi.hsmf
  org.apache.poi.hsmf.datatypes
  org.apache.poi.hsmf.extractor
  org.apache.poi.hwpf.extractor
  org.apache.tools.ant
  org.apache.tools.ant.taskdefs
  org.apache.tools.ant.types
  org.apache.xml.resolver
  org.apache.xml.resolver.tools
  org.apache.xml.security
  org.apache.xml.security.c14n
  org.apache.xml.security.signature
  org.apache.xml.security.utils
  org.bouncycastle.asn1
  org.bouncycastle.asn1.cmp
  org.bouncycastle.asn1.nist
  org.bouncycastle.asn1.ocsp
  org.bouncycastle.asn1.x500
  org.bouncycastle.asn1.x509
  org.bouncycastle.cert
  org.bouncycastle.cert.jcajce
  org.bouncycastle.cert.ocsp
  org.bouncycastle.cms
  org.bouncycastle.cms.bc
  org.bouncycastle.operator
  org.bouncycastle.operator.bc
  org.bouncycastle.tsp
  org.bouncycastle.util
  org.etsi.uri.x01903.v14
  org.junit
  org.junit.internal
  org.junit.runner
  org.junit.runner.notification
  org.openxmlformats.schemas.officeDocument.x2006.math
  org.openxmlformats.schemas.schemaLibrary.x2006.main
  org.w3c.dom
  org.w3c.dom.events
  org.w3c.dom.ls
  org.xml.sax
  org.xml.sax.ext
  org.xml.sax.helpers
Export-Package
  com.example.poi.constants {version=1.0.0}

Nuts!

Now we can see just what OSGi is going to want us to deal with. We can add the compileInclude directives for these artifacts if we want to include them, or we could mask them using the ! syntax for the bnd.bnd Import-Package directive, or even mark them as optional by listing them in the Import-Package directive with the resolution:=optional instruction, ala:

Import-Package:\
    ...\
    org.apache.tools.ant.*;resolution:=optional,\
    ...

Conclusion

During my testing, I did find that this is not a perfect solution. Not even the Bnd tool processes transitive dependencies completely.

For example, we can see from above that BouncyCastle is imported, but what we can't see are any transitive dependencies of BouncyCastle that might be lurking if we decide to include BouncyCastle. We would have to re-run the bnd print --impexp command again, but that is still better than the old tired answer I used to have to give.

Enjoy!

David H Nebinger 2018-08-19T20:23:00Z
Categories: CMS, ECM

New lessons on Liferay University

Liferay - Fri, 08/17/2018 - 03:00

As promised less than a month ago, we're working on more content for Liferay University. Meet your new professors Charles Cohick and Dimple Koticha.

The new lessons are

As with all lessons on Liferay University, they're completely free and available after logging in with your liferay.com account.

But wait, there's more...

Learn as much as you can

For a limited time, Liferay University Passport, our flat rate for each and every course on Liferay University, is available at an introductory price with an almost 30% discount. And even the course catalog isn't finished yet: there are more to come. So get it while the savings are this big. With the one-time payment for a personal passport, even the paid courses are free, and you have a full year to take them all.

Prefer a live trainer in the room?

Of course, if you prefer to have a live trainer in the room: the regular trainings are still available and have been updated to contain all of the courses that you find on Liferay University and Passport. And this way (with a trainer, on- or offline) you can book courses for all of the previous versions of Liferay as well.

And, of course, the fine documentation is still available and updated to contain information about the new version already.

(Photo: CC by 2.0 Hamza Butt)

Olaf Kock 2018-08-17T08:00:00Z
Categories: CMS, ECM

Liferay CE 7.x / Liferay DXP 7.x Java Agents

Liferay - Wed, 08/15/2018 - 10:38

A sysadmin came up to me and said he was having issues starting Liferay DXP 7.0; a bunch of ClassNotFoundExceptions (CNFEs) were coming up at startup.

I found that they were set up to use Wily to monitor their JVMs, and it was Wily's classes that were generating the CNFEs.

In general, when you add the -javaagent:XXX parameter to your app server's startup command, you're enabling an agent which has full access inside of the JVM, but only as long as the class loader hierarchy is available. The classes in the agent are injected into the highest point of the class loader hierarchy, so they are normally visible across the entire app server.

Except, of course, for Liferay's OSGi container.

Liferay takes great care when creating the OSGi container to limit "pollution" of its class loader, preventing classes from the app server from leaking in as global classes in the OSGi container.

For monitoring agents, though, the agent packages will not be available within OSGi even though the agent is still going to try to inject itself into object instantiation.

This leads to all of the ClassNotFoundExceptions for missing packages/classes during startup.

Enabling Agent Monitoring in OSGi

We can actually enable the agent inside of the OSGi container, but it takes an additional configuration step.

In our portal-ext.properties file, we need to add the packages from the agent to the module.framework.properties.org.osgi.framework.bootdelegation property.

Note here that I said "add". You can't just say module.framework.properties.org.osgi.framework.bootdelegation=com.agent.* and think it is going to work, because that strips out all of the other packages Liferay normally passes through boot delegation.

To find your list, you need your portal.properties file or access to the portal properties panel in the System Administration control panel (or Github, or your copy of the DXP source, or ...). Using the existing value as a guide, you'll end up with a value like:

module.framework.properties.org.osgi.framework.bootdelegation=\
    __redirected,\
    com.liferay.aspectj,\
    com.liferay.aspectj.*,\
    com.liferay.portal.servlet.delegate,\
    com.liferay.portal.servlet.delegate*,\
    com.sun.ccpp,\
    com.sun.ccpp.*,\
    com.sun.crypto.*,\
    com.sun.image.*,\
    com.sun.jmx.*,\
    com.sun.jna,\
    com.sun.jndi.*,\
    com.sun.mail.*,\
    com.sun.management.*,\
    com.sun.media.*,\
    com.sun.msv.*,\
    com.sun.org.*,\
    com.sun.syndication,\
    com.sun.tools.*,\
    com.sun.xml.*,\
    com.yourkit.*,\
    sun.*,\
    com.agent.*

See how I tacked on the "com.agent.*" at the end of the list?

You'll of course change the "com.agent" stuff to match whatever package your particular agent is using, but hopefully you get the idea.

David H Nebinger 2018-08-15T15:38:00Z
Categories: CMS, ECM

Extending Liferay DXP - User Registration (Part 2)

Liferay - Mon, 08/13/2018 - 02:24

This is the second part of the "Extending Liferay DXP - User Registration" blog. In this part we explore ways of implementing a registration process for a portal with multiple sites.

Portal Sites Configuration

Let’s presume we have a portal with the following sites configured:

  • "Liferay", default site, will not be listed
  • "Site 1", site with open membership
  • "Site 2", site with restricted membership
  • "Site 3", site with restricted membership
  • "Site 4", private site, will not be listed

Each of the sites has its own description which we want to display to the user:

 

User Registration Process Flow

The main steps of the user registration process that we are going to implement here are:

  1. Check if a user already has an account
  2. If the user already has an account but is not signed in, ask them to sign in
  3. If the user is already signed in, show the current read-only details of the user
  4. Show the sites the user is not a member of, with the description of a site displayed when it is selected
  5. Collect additional information from the user if the user has selected a 'restricted' site
  6. The user reviews the request, with the ability to save the request as a PDF file, and submits the form
  7. On form submission:
    1. Automatically create a user account if the user does not exist
    2. If the selected site is 'open', the user is automatically enrolled into the site
    3. If the selected site is 'restricted', a site membership request is created for this user
    4. If the selected site is 'restricted', a notification email is sent to the site owners with the user details attached as PDF

 

For the implementation of this process we use SmartForms.

Here we show only the essential screenshots of the User Registration Form; the entire form definition can be downloaded (see the link at the end of this blog). Once it is imported, you can use Visual Designer to see how the business rules are implemented.

 

User Flow

1. The user is asked to enter an email; SmartForms connects to portal webservices to check if such an email is already registered (the source code for all webservices is at the end of the blog)

2. If the user already has an account, a 'must log in' message is displayed

3. If the user is already signed in, the form is automatically populated with user details (user data is automatically brought in by SmartForms from the Liferay environment).

4. On the next page the user is asked to select a site from the site list obtained via a webservice. When the user selects a site, the description of the site is displayed (a webservice again). You can put your own definition of Terms & Conditions together with an 'I agree' checkbox.


5. If the site the user selected is of 'restricted' type, the user is asked to provide additional information.

SmartForms will automatically handle this rule once you add the following state condition to the 'Company/Organisation Details' page. Visual Designer screenshot:

6. The membership request summary is displayed on a final page, allowing the user to save the membership request details as a PDF file and submit the form when ready.

7. Processing the submitted form. There are two form submission handlers:

  1. Emailer, which sends a 'site membership request' email to the Site Owner(s) if the selected site is 'restricted', attaching a PDF that contains all the data submitted by the user (implemented using SmartForms workflow, but of course you can use your own).
  2. Webhook, which creates a user account if the user is not registered yet and submits a membership request for this user to the selected site (the source code for this webhook is at the end of the blog).

That's it. That is the user flow, which is probably longer than the implementation notes listed below.

Implementation Notes

Data Flow

Below is the generic data flow where the Form Controller could be implemented inside or outside of the portal.

In our implementation we are using SmartForms to build and run the User Registration Form. SmartForms Cloud is acting as the Form Controller.

 

User Registration Handler Code

The user registration handler implements:

  1. SOAP webservices to query portal data and populate form fields
  2. a JSON REST endpoint to execute all relevant actions on form submission: creation of portal users and site membership requests

Here we provide the source code for the essential functions; a link to the full source code is at the end of the blog.

 

Checking if User is Registered

@WebMethod(action = namespace + "getMemberStatusByEmail")
public String getMemberStatusByEmail(
        @WebParam(name = "fieldName") String fieldname,
        @WebParam(name = "fields") Field[] fieldList) {

    try {
        Map<String, Field> fieldMap = fieldArrayToMap(fieldList);
        Field emailAddressVO = fieldMap.get(FIELDNAME_USER_EMAIL);
        if (emailAddressVO == null) {
            logger.warn("Call to getMemberStatusByEmail() is misconfigured, cannot find field '" + FIELDNAME_USER_EMAIL + "'");
            return "false";
        }
        String emailAddress = emailAddressVO.getValue();

        if (emailAddress.trim().length() == 0) {
            return "false";
        }

        try {
            UserLocalServiceUtil.getUserByEmailAddress(this.getCompanyId(fieldMap), emailAddress);
            // no exception, user exists
            return "true";
        } catch (Exception e) {}
        // user is not registered
        return "false";
    } catch (Throwable e) {
        logger.error("System error ", e);
        return "false";
    }
}

 

Getting List of Open and Protected Sites

@WebMethod(action = namespace + "getSites")
public Option[] getSites(
        @WebParam(name = "fieldName") String fieldname,
        @WebParam(name = "fields") Field[] fieldList) {

    Option[] blankOptions = new Option[0];
    try {
        Map<String, Field> fieldMap = fieldArrayToMap(fieldList);
        long companyId = this.getCompanyId(fieldMap);
        User user = null;
        Field userScreennameField = fieldMap.get(FIELDNAME_USER_SCREENNAME);
        if (userScreennameField != null) {
            if (userScreennameField.getValue().trim().length() > 0) {
                try {
                    user = UserLocalServiceUtil.getUserByScreenName(companyId, userScreennameField.getValue());
                } catch (Exception e) {}
            }
        }

        List<Option> validGroups = new ArrayList<Option>();
        LinkedHashMap<String, Object> params = new LinkedHashMap<String, Object>();
        // limit selection by sites only
        params.put("site", new Boolean(true));
        // limit selection by active groups only
        params.put("active", new Boolean(true));
        List<Group> allGroupsList = GroupLocalServiceUtil.search(companyId, params, QueryUtil.ALL_POS, QueryUtil.ALL_POS);
        Iterator<Group> allActiveGroups = allGroupsList.iterator();
        while (allActiveGroups.hasNext()) {
            Group group = allActiveGroups.next();
            boolean isAlreadyAMember = false;
            // check if user is already a member of it
            if (user != null) {
                if (group.isGuest()) {
                    // is a member anyway
                    isAlreadyAMember = true;
                } else {
                    isAlreadyAMember = UserLocalServiceUtil.hasGroupUser(group.getGroupId(), user.getUserId());
                }
            }
            // add the site to the selection list if this is a regular community site and the user is not already a member of it
            if (group.isRegularSite() && !group.isUser() && !isAlreadyAMember && !group.isGuest()) {
                // include Open and Restricted sites only
                if (group.getType() == 1 || group.getType() == 2) {
                    validGroups.add(new Option(group.getName(group.getDefaultLanguageId()), String.valueOf(group.getGroupId())));
                }
            }
        }
        return validGroups.toArray(new Option[validGroups.size()]);
    } catch (Throwable e) {
        logger.error("System error ", e);
        return blankOptions;
    }
}

 

Getting Email Addresses of Owners of a Site

@WebMethod(action = namespace + "getSiteOwnerEmails")
public String getSiteOwnerEmails(
        @WebParam(name = "fieldName") String fieldname,
        @WebParam(name = "fields") Field[] fieldList) {

    Map<String, Field> fieldMap = fieldArrayToMap(fieldList);
    Group group = this.getSelectedGroup(fieldMap);
    if (group == null) {
        // no group selected yet
        return "";
    } else if (group.getType() != 2) {
        // this is not a restricted site
        return "";
    } else {
        // check if Terms and Conditions acknowledge is checked, otherwise no point of fetching email addresses
        Field termsAndConditionsField = fieldMap.get(FIELDNAME_TnC_CHECKBOX);
        if (termsAndConditionsField == null) {
            logger.warn("Call to getSiteOwnerEmails() is misconfigured, cannot find field '" + FIELDNAME_TnC_CHECKBOX + "'");
            return "";
        }
        if (termsAndConditionsField.getValue().length() == 0) {
            // not checked
            return "";
        }
        // make a list of email addresses of site owners for a restricted site
        // this will be used to send 'site membership request' email
        StringBuilder response = new StringBuilder();
        Role siteOwnerRole;
        try {
            siteOwnerRole = RoleLocalServiceUtil.getRole(group.getCompanyId(), RoleConstants.SITE_OWNER);
        } catch (PortalException e) {
            logger.error("Unexpected error", e);
            return "";
        }
        List<User> groupUsers = UserLocalServiceUtil.getGroupUsers(group.getGroupId());
        for (int i = 0; i < groupUsers.size(); i++) {
            User user = groupUsers.get(i);
            if (UserGroupRoleLocalServiceUtil.hasUserGroupRole(user.getUserId(), group.getGroupId(), siteOwnerRole.getRoleId())) {
                if (response.length() > 0) {
                    response.append(';');
                }
                response.append(user.getEmailAddress());
            }
        }
        logger.info("compiled site admin emails " + response.toString());
        return response.toString();
    }
}

 

Creating User Account and Site Membership Request

// method to process JSON webhook call
@POST
@Path("/user-registration")
public void newFormSubmittedAsJsonFormat(String input) {
    logger.info("In /webhook/user-registration");

    /* check authorization */
    String handShakeKey = request.getHeader("X-SmartForms-Handshake-Key");
    if (handShakeKey == null || !handShakeKey.equals(SMARTFORMS_HADSHAKE_KEY)) {
        throw new WebApplicationException(Response.Status.UNAUTHORIZED);
    }

    JSONObject data;
    try {
        data = JSONFactoryUtil.createJSONObject(input);

        Map<String, String> fields = this.jsonFormDataToFields(data);
        logger.info("Have received fields " + fields.size());
        User user = null;

        ServiceContext serviceContext = new ServiceContext();
        long groupId = Long.parseLong(fields.get(FIELD_SITE_ID));
        long companyId = Long.parseLong(fields.get(FIELD_COMPANY_ID));
        if (fields.get(FIELD_USER_ID).length() > 0) {
            logger.info("User is already registered");
            try {
                user = UserLocalServiceUtil.getUser(Long.parseLong(fields.get(FIELD_USER_ID)));
            } catch (Exception e) {
                logger.error("Unable to fetch user", e);
                throw new WebApplicationException(Response.Status.NOT_FOUND);
            }
        } else {
            // create user
            String firstName = fields.get(FIELD_USER_FIRST_NAME);
            String lastName = fields.get(FIELD_USER_LAST_NAME);
            String email = fields.get(FIELD_USER_EMAIL);
            logger.info("Creating user " + firstName + " " + lastName + " " + email);
            try {
                // the following data could come from the form, but we just provide some hard-coded values
                long groups[] = new long[0];
                if (!fields.get(FIELD_SITE_TYPE).equals("restricted")) {
                    // this is an open group, add it to the list
                    groups = new long[1];
                    groups[0] = groupId;
                }

                long blanks[] = new long[0];
                boolean sendEmail = false;
                Locale locale = PortalUtil.getSiteDefaultLocale(groupId);
                boolean male = true;
                String jobTitle = "";
                long suffixId = 0;
                long prefixId = 0;
                String openId = null;
                long facebookId = 0;
                String screenName = null;
                boolean autoScreenName = true;
                boolean autoPassword = true;
                long creatorUserId = 0;
                user = UserLocalServiceUtil.addUser(
                    creatorUserId, companyId,
                    autoPassword, null, null,
                    autoScreenName, screenName, email,
                    facebookId, openId, locale,
                    firstName, "", lastName,
                    prefixId, suffixId, male,
                    1, 1, 2000,
                    jobTitle,
                    groups, blanks, blanks, blanks,
                    sendEmail,
                    serviceContext);
            } catch (Exception e) {
                logger.error("Unable to create user", e);
                throw new WebApplicationException(Response.Status.INTERNAL_SERVER_ERROR);
            }
        }

        if (fields.get(FIELD_SITE_TYPE).equals("restricted")) {
            try {
                MembershipRequestLocalServiceUtil.addMembershipRequest(user.getUserId(), groupId,
                    "User has requested membership via User Registration Form, you should have received an email", serviceContext);
            } catch (PortalException e) {
                logger.error("Unable to add membership request");
            }
        }
    } catch (JSONException e) {
        logger.error("Unable to create json object from data");
        throw new WebApplicationException(Response.Status.BAD_REQUEST);
    }
}

private Map<String, String> jsonFormDataToFields(JSONObject data) {
    Map<String, String> map = new HashMap<String, String>();
    JSONArray fields = data.getJSONArray("fields");
    for (int i = 0; i < fields.length(); i++) {
        JSONObject field = fields.getJSONObject(i);
        logger.info(field.toJSONString());
        map.put(field.getString("name"), field.getString("value"));
    }
    return map;
}

 

Full source code for Form Handler project can be downloaded from here:

http://extras.repo.smartfor.ms/blogs/Extending-Liferay-User-Registration/userRegistration-source.zip

 

To make it work on your Liferay installation using SmartForms you will need to:

One more thing: in the SmartForms webhook configuration, please change localhost to the URL of your portal:

Feel free to ask questions if you run into problems ...

Victor Zorin 2018-08-13T07:24:00Z
Categories: CMS, ECM

How to upgrade my sharded environment to Liferay 7.x?

Liferay - Fri, 08/10/2018 - 02:35

Hi Liferay Community,

Before answering this question I would like to explain what sharding is: to overcome the horizontal scalability concerns of open source databases at the time (circa 2008), Liferay implemented physical partitioning support. The solution allowed administrators to configure portal instances to be stored in different database instances and database server processes.

This feature was originally named "sharding", although "data partitioning" is more accurate since it requires a small amount of information sharing to coordinate partitions.

Thus, beginning in 7.0, Liferay removed its own physical partitioning implementation in favor of the capabilities provided natively by database vendors. Please notice that logical partitioning via the "portal instance" concept (a logical set of data grouped by the companyId column, with data security at the portal level) is not affected by this change and is available in current Liferay versions.

Having explained this, the answer to the question is simple: just follow the official procedure:
https://dev.liferay.com/discover/deployment/-/knowledge_base/7-0/upgrading-sharded-environment

So Liferay 7.x provides a process which will convert all shards into independent database schemas after the upgrade. This can be suitable for those cases where you need to keep information separated for legal reasons. However, if you cannot afford to maintain one complete environment for each of those independent databases, you could try another approach: disable sharding by merging all shards into just one database schema before performing the upgrade to Liferay 7.x.

The option of merging all shard schemas into the default one is feasible because sharding generates unique ids for every row across all databases. These are the steps you should follow to achieve this:

  1. Create a backup for the shard database schemas in the production environment.
  2. Copy the content of every table in the non-default shards into the default shard. It's recommended to create an SQL script to automate this process (see the sketch after this list).
  3. If a unique index is violated, analyze the two records which cause the issue and remove one of them, since it is no longer necessary (different reasons could have caused the creation of data in the incorrect shard in the past, such as wrong configuration, a bug, issues with custom developments, etc.)
  4. Resume this process from the last point of failure.
  5. Repeat 3 and 4 until the default shard database contains all data from the other shards.
  6. Clean up the Shard table except for the default shard record.
  7. Start up a Liferay server using this database without the sharding portal.properties:
    1. Remove all database connections except for the default one.
    2. Comment out the META-INF/shard-data-source-spring.xml entry in the spring.configs property.
  8. Ensure that everything works well and that you can access the different portal instances.
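
To illustrate steps 2 and 6, here is a minimal SQL sketch. The MySQL-style schema names lportal (the default shard) and lportal_shard1 are assumptions, User_ stands in for each of the tables, and the Shard table cleanup should match your own default shard name; the real script must repeat the copy for every table in every non-default shard:

-- step 2: copy a table's rows from a non-default shard into the default shard
INSERT INTO lportal.User_ SELECT * FROM lportal_shard1.User_;

-- step 6: clean up the Shard table, keeping only the default shard record
-- (the name column and 'default' value are assumptions; check your Shard table)
DELETE FROM lportal.Shard WHERE name <> 'default';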

It is recommended that you keep a record of the changes made in steps 3 and 6, since you will need to repeat this process when you decide to go live after merging all databases into the default shard. It is also advisable to do this as a separate project before performing the upgrade to Liferay 7.x. Once you have completed this process, you will just need to execute the upgrade as for a regular non-sharded environment:
https://dev.liferay.com/en/discover/deployment/-/knowledge_base/7-1/upgrading-to-liferay-71

This alternative for upgrading sharded environments is not officially supported, but it has been executed successfully in a couple of installations. For that reason, if you have any question regarding it, please write a comment on this blog entry or open a new thread in the community forums; other members of the community and I will try to assist you during this process.

Alberto Chaparro 2018-08-10T07:35:00Z
Categories: CMS, ECM