AdoptOS

Assistance with Open Source adoption

Liferay
Liferay Community

Analyzing Activity in Liferay Forums 

Tue, 02/05/2019 - 04:14

Hello Liferay Community Members!

From 2007 – 2018, over 30,000 members made over 130,000 comments in the Liferay community forums. As the community discussions have recently moved to a new platform, this is a good time to analyze the great exchange of information over the past 12 years. 

In this article, I use data science tools (Python's pandas library and Python's Natural Language Toolkit, NLTK) on the Liferay forums to show how analyzing forum data can increase your understanding of your own online communities and enable you to make better operational and strategic decisions.

DATA DESCRIPTION

I started with a raw file of all of the forum data. With some data cleaning, I created a file containing the details of every forum message. 

To visualize the final 5 rows of the file, I used the command forums.tail(5).

Note: No names or personal details appear here. The original forum postings and member profiles are public, so this is an extra cautionary step.

Let's explore the data to learn more about this member journey.

EXPLORING THE DATA WITH FUNCTIONS AND PLOTS: ANALYZING THE MEMBER JOURNEY

Understanding the member journey over time allows us to understand how users evolve and how to best engage them during this process.

To reproduce the questions and answers of a single discussion thread, I used forums.loc[forums.threadId==103912609].head(5) to filter by the “threadId”, which produced the following results:
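As a minimal sketch of these inspection and filtering steps, here is the same pattern on a tiny, invented frame (only the column names and the thread id come from the article; the rows are hypothetical):

```python
import pandas as pd

# Hypothetical miniature stand-in for the real forum export; the column
# names (threadId, userId) and the thread id follow the article, but the
# rows themselves are invented for illustration.
forums = pd.DataFrame({
    "threadId": [103912609, 103912609, 200, 200, 300],
    "userId":   [1, 2, 1, 3, 2],
    "subject":  ["How do I deploy?", "RE: How do I deploy?",
                 "Theme question", "RE: Theme question", "Portlet error"],
})

# Visualize the final rows of the file, as with forums.tail(5)
print(forums.tail(5))

# Reproduce a single discussion thread by filtering on threadId
thread = forums.loc[forums.threadId == 103912609]
print(thread.head(5))
```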

It appears that experienced members are trying to assist the newbies.  Indeed, a central goal of the forums is to help members advance in their journey from novices to experts.

As Jamie Sammons, Developer Advocate at Liferay, Inc. says:

“Most contributors usually begin within the Liferay community as consumers, reading the documentation, the forums and Slack. For many, the first step to active involvement is to receive help within the forums by asking questions specific to their environments. Most contributors who begin helping others start this way, and then, once they feel they are learning the ropes, sometimes even feel obligated to help others to pay back the support they received.”

ANALYZING MEMBER LIFESPANS

Do typical members just post once, or do they remain active over a longer period?  I calculated how long the users remained active by grouping each user’s posts together and then computing the difference in time between the first and last post. I only included members who started posting before 2015, since the newest members are likely still active.
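A minimal pandas sketch of that lifespan computation, using invented posts for three hypothetical users:

```python
import pandas as pd

# Invented posts for three hypothetical users; the real analysis runs the
# same groupby over every member who started posting before 2015.
posts = pd.DataFrame({
    "userId": [1, 1, 1, 2, 3, 3],
    "postDate": pd.to_datetime([
        "2010-01-01", "2010-06-01", "2012-01-01",  # user 1: two years
        "2011-03-15",                              # user 2: a single post
        "2013-02-01", "2013-02-08",                # user 3: one week
    ]),
})

# Group each user's posts together, then subtract the first post date
# from the last to get that member's forum lifespan
lifespans = posts.groupby("userId")["postDate"].agg(lambda s: s.max() - s.min())
print(lifespans)
```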

I used pandas' powerful "groupby" feature to create this data frame. You can see the details on GitHub; an example of my results appears below.

What was the distribution of members' lifespans? The plot below demonstrates that almost 800 members had a lifespan of one day, with a large drop-off in frequency thereafter.

I was also interested in understanding the veteran members. I zeroed in on the veterans' lifespans by changing the scale on the graph above from days to years. This chart shows that 1,800 members had forum lifespans of one to two years. The bottoming out after two to four years is not surprising, as developers often have Liferay projects that last 2-4 years, after which they move on to other projects or companies. As Jamie Sammons, Developer Advocate at Liferay, Inc., says:

"Most of the time what happens is the developer works for a consulting company and moves on to a new project, or the developer may change jobs altogether. In a few cases the developer works for an SI or partner company and they simply have a massive workload due to good business and simply cannot find time to contribute."

With this data in hand, community managers can initiate a special outreach campaign after a member writes one post, and send personalized messages to new members, encouraging continued engagement. Perhaps they could even award new members points in a "gamification" system to foster further interaction.

USING NATURAL LANGUAGE PROCESSING (NLP) TO UNDERSTAND THE COMMUNITY

Word clouds are visualizations of content, and they can be used to improve members' experiences in the community. For example, posts on the topics identified through the word cloud can appear in the members' activity feeds, making the feeds more relevant and engaging.

It is helpful to understand the popular topics of the entire community as well. The chart below contains the word counts from the subject lines of every discussion in Liferay's "Announcements" and "Development" forum categories. The "?" and the word "how" that appear in the Development category make sense, as members typically ask other members technical or product-related questions (the trigrams below further corroborate this). In the Announcements category, the "!" often appears, indicative of members celebrating successes (e.g., "Congrats on the sale!").

Development Category Trigrams:

NLP can also help with routine, but essential, community management operations. In "What's Next for the Liferay Community," Liferay's CEO describes one of their community management challenges:

"Although [the] numbers can show how vibrant this community is, we know we can do better, especially when it comes to the number of unanswered questions."

Using NLP, one approach is to optimize the expert assignments by category. Using the code below, I explored common words in the large Development category that could be used to form smaller, more manageable discussion categories.

  long_words = [word for word in words
                if len(word) > 2 and word not in stoplist]
  fdistLong = nltk.FreqDist(long_words)
  fdistLong.most_common(50)

It is clear that there are some good candidates here for topics that could be separated out to form new sub-groups:

1. Portlet
2. JSP
3. Builder
4. Theme
5. Database

Lexical dispersion plots are a way to visualize trending topics in the forums, allowing community managers to strategically introduce new subjects or create new discussions on existing popular issues. The plot below indicates that the community talked intensely about "6.1" (presumably, a version of Liferay), with an overlap of "6.1" and "6.2" as "6.2" was released.

  mytext = nltk.Text(words)
  mytext.dispersion_plot(["6.1", "6.2"])

Below is an analysis of some other topics in the top-50 list:

  mytext.dispersion_plot(["jsp", "database", "struts", "image"])

The above functions and visualizations are just the tip of the iceberg; there is much more that can be discovered by analyzing user behavior and the content itself. Yet even the relatively basic but thoughtful data science techniques I have demonstrated can help you strategically analyze and improve your community management.

About the author: Adam Zawel is Vice President of Strategy with Leader Networks, a research and consulting firm.

Adam Zawel 2019-02-05T09:14:00Z

Categories: CMS, ECM

Never get into trouble with 'Unresolved requirement' again

Tue, 02/05/2019 - 00:02

If you are new to the Liferay 7.x platform, which has migrated to OSGi, you have probably met the error Unresolved requirement: Import-Package when deploying your bundles. I have an easy solution for you: take a look at your bundle's MANIFEST.MF.

When you deploy your bundle to the Liferay portal, the OSGi container resolves your bundle's MANIFEST.MF, which declares all of its dependency relationships. If your MANIFEST file declares an imported package that doesn't exist in the OSGi runtime, or that can't be found by the OSGi classloader, you will get this `Unresolved requirement` message. Here are some use cases.
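For illustration, the relevant part of a bundle's MANIFEST.MF might look like the following (the bundle name and version range are hypothetical). Every package listed under Import-Package must be exported by some bundle in the OSGi runtime, or resolution fails:

```
Bundle-SymbolicName: com.example.my.portlet
Bundle-Version: 1.0.0
Import-Package: com.liferay.portal.kernel.util;version="[7.0,8)"
```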

1. If you want to import a third-party dependency, you may reach for compileOnly or compile at first. Unfortunately, those only get you through Java compilation; OSGi knows nothing about how to find your dependency at runtime. That is where the compileInclude configuration of the Gradle plugin comes in.

It packages the `compileInclude` dependencies into the jar's lib folder and sets a Bundle-ClassPath property in the MANIFEST, so the OSGi classloader can locate them.
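For example, after declaring httpclient 4.5.6 with compileInclude, the generated MANIFEST.MF would contain a header along these lines (a sketch of the idea, not exact bnd output):

```
Bundle-ClassPath: .,lib/httpclient-4.5.6.jar
```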

2. You may encounter some weird problems when using certain dependencies.

dependencies {

      compileInclude group: 'org.apache.httpcomponents', name: 'httpclient', version: '4.5.6'

}

But you will get an error message:

Unresolved requirement: Import-Package: org.apache.avalon.framework.logger

Why does this happen when I never imported that package? Remember the solution I mentioned: take a look at your jar's MANIFEST.MF, and you will find that it actually contains the import package org.apache.avalon.framework.logger. It was introduced because the httpclient dependency imports this package, so the bnd tool (the OSGi bundle builder) adds it automatically. You can exclude this import package in bnd.bnd:

Import-Package: \

  !*avalon*,\

  \

  *

3. The unresolved import package error can also happen when you haven't pulled in any third-party dependencies at all, as I once discovered. Suppose you are developing a fragment bundle (for example, one targeting com.liferay.portal.search.web) to override a Liferay bundle, copying in the file /META-INF/resources/custom/facet/view.jsp so you can modify it. Even if you haven't modified the file yet, you can still get the unresolved import error:

`Unresolved requirement: Import-Package: com.liferay.portal.search.web.internal.custom.facet.display.context`

This package is an internal package, meaning it is not meant to be exposed to the outside, yet your fragment tries to import it. You can safely exclude this import package in bnd.bnd, so the MANIFEST won't contain the import declaration. Will this result in a NoClassDefFoundError at runtime? No, it won't, because the host bundle and the fragment bundle share the same class loader, which loads this package's classes.

Below is the order in which OSGi loads classes; I hope it helps you understand the resolving process:

Charles Wu 2019-02-05T05:02:00Z
Categories: CMS, ECM

Liferay 7.2 Milestone 2 Release

Fri, 02/01/2019 - 18:46

The main purpose of this release is to give you a sneak peek of what's coming in 7.2 and to get an early start on testing our packaging process. It is by no means feature-complete and hasn't been tested for quality.

The Liferay 7.2 Community Beta Program is scheduled to start with the release of 7.2 Alpha 1 in the coming weeks. Similar to 7.1, the 7.2 beta program will be divided into two phases with the first phase being focused on receiving feedback on new features included in the Alpha builds. The second phase will be a bug hunt based on the 7.2 betas. Although M2 does not have a formal beta program, it is a good chance to see what’s being worked on in the release.

Download Milestone 2 now! See you soon with details on the beta program!

Jamie Sammons 2019-02-01T23:46:00Z
Categories: CMS, ECM

Liferay 7 SSO using OpenId Connect

Wed, 01/30/2019 - 11:57

I just completed a project that integrated Liferay 7.0 GA7 with Keycloak 4.8 for both authentication and authorization. For those who are not familiar with Keycloak, it is an open source access and identity manager: https://www.keycloak.org/.

The authentication piece of this integration was assisted by the OpenId Connect Plugin that is available in the Liferay Marketplace. Authorization was achieved through the use of a Liferay post login hook and the Keycloak RESTful API.

This integration also involved the use of Apache Httpd for the virtual hosting of both Liferay and Keycloak. This virtual hosting provided SSL support as well.

This blog assumes that Liferay, Keycloak and Apache Httpd are installed and that AJP is enabled on both the Liferay and Keycloak application servers. AJP will be used by Httpd virtual hosting to proxy requests to both Liferay and Keycloak. The following are the Httpd settings that were used to perform this virtual hosting:

#virtual host for liferay
<VirtualHost *:443>
    ServerName aim1.xxxxxx.net    
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/aim1.xxxxxxx.net.cert.pem
    SSLCertificateKeyFile /etc/pki/tls/private/aim1.xxxxxx.net.key.pem
    
    # Set the header for the https protocol
    #RequestHeader set X-Forwarded-Proto "https"
    #RequestHeader set X-Forwarded-Port "443"     
 
    # Serve /excluded from the local httpd data
    ProxyPass /excluded !
         
    # Preserve the host when invoking tomcat
    ProxyPreserveHost on
         
    # Pass all traffic to a localhost tomcat.
    ProxyPass / ajp://localhost:8009/
    ProxyPassReverse / ajp://localhost:8009/
</VirtualHost>

# virtual host for keycloak
Listen 17443
<VirtualHost *:17443>
    ServerName aim1.xxxxxx.net
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/aim1.xxxxxx.net.cert.pem
    SSLCertificateKeyFile /etc/pki/tls/private/aim1.xxxxxx.net.key.pem
    
    # Set the header for the https protocol
    #RequestHeader set X-Forwarded-Proto "https"
    #RequestHeader set X-Forwarded-Port "443" 
    
    # Serve /excluded from the local httpd data
    ProxyPass /excluded !
         
    # Preserve the host when invoking tomcat
    ProxyPreserveHost on
         
    # Pass all traffic to a localhost tomcat.
    ProxyPass / ajp://localhost:8589/
    ProxyPassReverse / ajp://localhost:8589/
</VirtualHost>

The settings above assume that Httpd, Liferay and Keycloak are all on the same host. If this is not the case, then localhost should be substituted with a valid IP address or hostname for the Liferay and Keycloak servers.

Keycloak Configuration

For the Keycloak configuration, I created a separate realm for Liferay with a specific client, roles and users. The client configuration is probably the most important part of the realm configuration, especially its redirect URL:

The redirect URL shown above, https://aim1.xxxxxx.net/*, is the base URL of the Liferay site I want to authenticate with Keycloak. Note the * wildcard character, which allows Keycloak to access everything under this URL, especially /c/portal/login.

Liferay's OpenID Connect Plugin Configuration

One other assumption I make is that the OpenID Connect Plugin has been downloaded from the Liferay Marketplace and deployed to Liferay. Once that is done, an additional tab should appear in the Authentication section of the Liferay Control Panel's Instance Settings. The following are the settings that were used for my instance:

One thing to bear in mind: because I'm coming through my Httpd front door using SSL, the JWT token generated by Keycloak is encrypted. In order for the plugin to be able to decrypt it, the JVM that Liferay runs on needs to have its SSL certificates updated with the certificate that Httpd is using.

With the plugin configured and enabled, Liferay can now participate in the SSO provided by Keycloak. If you are authenticated by Keycloak while accessing another application that it also protects, you should have a valid JWT token in your browser, which should allow you to access Liferay without having to log in. If you're not yet authenticated, the plugin will force you to authenticate with Keycloak.

More information on the OpenID Connect Plugin can be found at the following URL: https://github.com/finalist/liferay-oidc-plugin/blob/oidc-parent-0.5.0/README.md. Take note of the great sequence diagram located at the end of that readme page.

What About Authorization?

If Keycloak SSO is working correctly, we can now log into Liferay with users that are managed by Keycloak. The OpenID Connect Plugin will automatically add these authenticated users into Liferay's user repository. These users can also be assigned roles in Keycloak, and the role information can be accessed using Keycloak's RESTful API.
I found that the most direct way to do this with Keycloak's API is to use the following RESTful method, which gets a role's membership by role name:

GET /{realm}/clients/{id}/roles/{role-name}/users

I employed this call as part of a RESTful client that gets both the admin access token and the user's roles:

    public String getAdminAccessToken() {
        Form form = new Form();
        form.param(USERNAME, AUTH_SERVICE_USERNAME);
        form.param(PASSWORD, AUTH_SERVICE_PASSWORD);
        form.param(GRANT_TYPE, AUTH_SERVICE_GRANT_TYPE);
        form.param(CLIENT_ID, AUTH_SERVICE_CLIENT_ID);
        Boolean encode = true;
        String accessToken = null;
        Response response = client.postForm(AUTH_SERVICE_URL, AUTH_SERVICE_TOKEN_PATH, MediaType.APPLICATION_JSON, form, encode);
        if (response != null) {
            AuthToken authToken = response.readEntity(AuthToken.class);
            if (authToken != null) {
                accessToken = authToken.getAccessToken();
            } else {
                LOGGER.error(COULD_NOT_GET_ADMIN_AUTH_TOKEN);
            }
        } else {
            LOGGER.error(COULD_NOT_GET_ADMIN_AUTH_RESPONSE);
        }
        return accessToken;
    }

    public List<String> getUserRoles(String accessToken, String userName) {
        if (accessToken == null || accessToken.isEmpty()) {
            LOGGER.error(INVALID_ARGUMENT_NO_ACCESS_TOKEN_PROVIDED);
            return null;
        }

        if (userName == null || userName.isEmpty()) {
            LOGGER.error(INVALID_ARGUMENT_NO_USER_NAME_PROVIDED);
            return null;
        }

        List<String> userRoles = new ArrayList<String>();
        for (String authServiceRole : AUTH_SERVICE_ROLES) {
            String roleUserPath = AUTH_SERVICE_ROLES_PATH + SLASH + authServiceRole + USERS_PATH;
            Response response = client.get(AUTH_SERVICE_URL, roleUserPath, MediaType.APPLICATION_JSON, null, accessToken);
            if (response != null) {
                List<AuthUser> authUsers = response.readEntity(new GenericType<List<AuthUser>>() {});
                if (authUsers != null && authUsers.size() > 0) {
                    for (AuthUser authUser : authUsers) {
                        String authUserName = authUser.getUsername();
                        if (authUserName != null && authUserName.equals(userName)) {
                            userRoles.add(authServiceRole);
                            break;
                        }
                    }
                }
            }
        }
        return userRoles;
    }

There are other ways to do this, but they would require multiple RESTful calls to achieve the same result. Unfortunately, there seems to be no way with the current Keycloak API to access a user's roles directly by username. More information on the Keycloak API can be found at the following URL:

https://www.keycloak.org/docs-api/4.8/rest-api/index.html#_roles_resource

The user's Keycloak roles can be synchronized with corresponding Liferay roles using the Liferay API. This can be triggered by a post-login event.
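To make the call concrete, the per-role membership path that getUserRoles assembles corresponds to Keycloak's admin REST endpoint shown above. Here is a minimal stand-alone sketch of the path construction; the realm, client id, and role names are hypothetical placeholders, not values from the original client:

```java
public class KeycloakPaths {

    // Builds the admin REST path that lists the users holding a given client role:
    // GET /{realm}/clients/{id}/roles/{role-name}/users
    static String roleUsersPath(String realm, String clientId, String roleName) {
        return "/admin/realms/" + realm + "/clients/" + clientId
            + "/roles/" + roleName + "/users";
    }

    public static void main(String[] args) {
        // Hypothetical realm, client id, and role purely for illustration.
        System.out.println(roleUsersPath("portal-realm", "liferay-client", "publisher"));
    }
}
```

An HTTP GET to such a path with the admin bearer token returns a JSON array of user representations, which the client above then filters by username.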
Here's a method I have that gets invoked once the user has been successfully authenticated:

    public String getScreenName() {
        ExternalContext externalContext = FacesContext.getCurrentInstance().getExternalContext();
        HttpServletRequest request = PortalUtil.getHttpServletRequest((PortletRequest) externalContext.getRequest());
        HttpServletRequest originalRequest = PortalUtil.getOriginalServletRequest(request);

        HttpServletResponse response = PortalUtil
                .getHttpServletResponse((PortletResponse) externalContext.getResponse());
        try {
            User user = PortalUtil.getUser(originalRequest);
            screenName = user.getScreenName();
            LOGGER.info("screenName=" + screenName);

            String reminderQueryQuestion = user.getReminderQueryQuestion();
            if (reminderQueryQuestion == null || reminderQueryQuestion.isEmpty()
                    || reminderQueryQuestion.equals(OPENID_CONNECT_REMINDER_QUESTION) == false) {
                LOGGER.info("Not an OpenID Connect user");

                response.sendRedirect(landingPage);
                return screenName;
            }
            String adminAccessToken = AUTH_CLIENT.getAdminAccessToken();
            LOGGER.info("adminAccessToken=" + adminAccessToken);
            if (adminAccessToken != null && adminAccessToken.length() > 0) {
                long userId = user.getUserId();

                if (screenName != null && screenName.length() > 0) {
                    List<String> userRoles = AUTH_CLIENT.getUserRoles(adminAccessToken, screenName);
                    LOGGER.info("userRoles=" + userRoles);
                    if (userRoles != null && userRoles.size() > 0) {
                        for (String userRole : userRoles) {
                            Role role = RoleLocalServiceUtil.getRole(user.getCompanyId(), userRole);
                            long roleId = role.getRoleId();
                            boolean hasUserRole = RoleLocalServiceUtil.hasUserRole(userId, roleId);
                            if (hasUserRole) {
                                LOGGER.info(screenName + HAS_FOLLOWING_ROLE + role.getName());
                                continue;
                            }

                            RoleLocalServiceUtil.addUserRole(userId, roleId);
                            LOGGER.info(ADDED + screenName + TO_FOLLOWING_ROLE + role.getName());
                        }

                        List<String> authServiceRoles = AuthClient.getAUTH_SERVICE_ROLES();
                        List<Role> portalUserRoles = user.getRoles();
                        for (Role portalUserRole : portalUserRoles) {
                            String portalRoleName = portalUserRole.getName();
                            if (authServiceRoles.contains(portalRoleName)) {
                                if (!userRoles.contains(portalRoleName)) {
                                    RoleLocalServiceUtil.deleteUserRole(userId, portalUserRole.getRoleId());
                                    LOGGER.info(DELETED + screenName + FROM_THE_FOLLOWING + portalRoleName);
                                }
                            }
                        }

                        if (user.getRoleIds().length <= 1) {
                            throw new Exception(screenName + USER_HAS_NO_ROLES);
                        }
                    } else {
                        throw new Exception(screenName + USER_HAS_NO_ROLES);
                    }
                }
            }

            response.sendRedirect(landingPage);
        } catch (Exception e) {
            LOGGER.error(e.getLocalizedMessage());
            try {
                response.sendRedirect(errorPage);
            } catch (IOException e1) {
                LOGGER.error(e1.getLocalizedMessage());
            }
        }

        return screenName;
    }

The code above will add or remove a user to or from roles in Liferay based on the corresponding roles they have in Keycloak. I hope some will find this blog useful and that it saves them some time.

William Gosse 2019-01-30T16:57:00Z
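The add/remove logic in the method above boils down to a set reconciliation: add the Keycloak roles the user is missing in Liferay, and remove the managed Liferay roles the user no longer holds in Keycloak. A minimal stand-alone sketch of that reconciliation with plain collections (no Liferay API; all names are illustrative):

```java
import java.util.*;

public class RoleSync {

    /** Computes which roles to add and which to remove so that the portal's
     *  managed roles end up matching the auth server's roles. */
    static Map<String, Set<String>> reconcile(Set<String> keycloakRoles,
                                              Set<String> portalRoles,
                                              Set<String> managedRoles) {
        Set<String> toAdd = new TreeSet<>(keycloakRoles);
        toAdd.removeAll(portalRoles);              // in Keycloak but not yet in Liferay

        Set<String> toRemove = new TreeSet<>(portalRoles);
        toRemove.retainAll(managedRoles);          // only touch roles we manage
        toRemove.removeAll(keycloakRoles);         // no longer present in Keycloak

        Map<String, Set<String>> plan = new HashMap<>();
        plan.put("add", toAdd);
        plan.put("remove", toRemove);
        return plan;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> plan = reconcile(
            new HashSet<>(Arrays.asList("editor", "reviewer")),    // roles in Keycloak
            new HashSet<>(Arrays.asList("editor", "publisher")),   // roles in Liferay
            new HashSet<>(Arrays.asList("editor", "publisher", "reviewer")));
        System.out.println(plan.get("add"));     // [reviewer]
        System.out.println(plan.get("remove"));  // [publisher]
    }
}
```

Restricting removals to the managed role set mirrors the AUTH_SERVICE_ROLES check in the original code, which prevents the sync from deleting portal roles that Keycloak knows nothing about.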

Categories: CMS, ECM

New Liferay Project SDK Installers 2019.01.21

Fri, 01/25/2019 - 01:26

The new release of the Liferay Project SDK and Studio Installers has been made available today. This new package supports Eclipse Photon or greater.

Download

Customers can download all of them from the customer studio download page.

Community downloads

https://community.liferay.com/project/-/asset_publisher/TyF2HQPLV1b5/content/ide-installation-instructions

Release highlights

Installers Improvements:

  • Bundled the latest Liferay Portal 7.1.1 GA2 in the Liferay Project SDK with Dev Studio Community Edition installers
  • Support userHome parameter in command line mode
Development Improvements:
  • Updated the Gradle plugin Buildship to the latest 3.0.2
  • Better support for Liferay Workspace
  • Watch support improvements
  • Better deployment support for Liferay 7
    • Support for Liferay 7.1 CE GA2 on Tomcat and Wildfly
    • Integration of Blade CLI 3.4.1
    • Support for Liferay Workspace Gradle plugin 1.10.13
  • Miscellaneous bug fixes
Feedback

If you run into any issues or have any suggestions, please come find us on our community forums or report them on JIRA (IDE project); we are always around to try to help you out. Good luck!

Yanan Yuan 2019-01-25T06:26:00Z
Categories: CMS, ECM

New Liferay Project SDK Installers 2018.11.4

Fri, 01/25/2019 - 00:57

Key Features

  • Watch improvements on gradle liferay workspace
  • Add Target Platform in new Liferay Workspace wizard
  • Bug fixes

 

Upgrade From previous 3.x

  • Download updatesite here
  • Go to Help > Install New Software… > Add…
  • Select Archive... Browse to the downloaded updatesite
  • Click OK to close Add repository dialog
  • Select all features to upgrade, then click Next; click Next again and accept the license agreements
  • Finish and restart to complete the upgrade
About Watch

Users can find a Liferay Workspace node under their servers when they develop in a Gradle Liferay Workspace. With this, users can watch a large number of projects at the same time.

Target Platform in Wizard

Dependency management for Target Platform is enabled by default when you create a Gradle Liferay Workspace.

Yanan Yuan 2019-01-25T05:57:00Z
Categories: CMS, ECM

Liferay IntelliJ Plugin 1.2.1 Released

Thu, 01/24/2019 - 03:12

The latest release of the Liferay IntelliJ plugin, 1.2.1, has been made available today. Head over to this page to download it.

Release Highlights:

  • Wizards
    • Added new module ext wizard
    • Add target platform option in new liferay workspace wizard
  • Editor Improvements
    • Code completion
      • Java bean for Liferay Taglib
      • More OSGi component properties
      • Additional hints for model-hints XML files
      • More resource bundle keys for Liferay Taglib
    • Better support for bnd.bnd files
  • Support quick fix for gradle dependencies
  • Update embedded blade to 3.4.1
  • Bug Fixes
Wizards

Users can find the new wizard by clicking File > New > Liferay Module. A Target Platform option has been added to the new Liferay Gradle workspace project wizard.

Using Editors

Quick Fix

Quick fix is enabled by default if a target platform has been set.

Known Tickets

INTELLIJ-34

Special Thanks

Thanks to Dominik Marks for the improvements.

Feedback

If you run into any issues or have any suggestions, please come find us on our community forums or report them on JIRA (INTELLIJ project); we are always around to try to help you out. Good luck!

Yanan Yuan 2019-01-24T08:12:00Z
Categories: CMS, ECM

IntelliJ IDEA - Debug Liferay 7.1

Tue, 01/22/2019 - 03:22

Problem:

Running Liferay 7.1 from IDEA in Debug mode throws the following error by default:

java.lang.NoClassDefFoundError: com/intellij/rt/debugger/agent/CaptureStorage

However, Run mode works without any issues.

 

Fix:

Disable Instrumenting agent in "Settings | Build, Execution, Deployment | Debugger | Async Stacktraces":

Hope this will help :)

Vitaliy

Vitaliy Koshelenko 2019-01-22T08:22:00Z
Categories: CMS, ECM

Liferay Portal 7.1 CE GA3 Release

Mon, 01/21/2019 - 14:40
Overview

New Features Summary
  • Oracle OpenJDK 11 -  GA3 has been tested for use with Oracle OpenJDK 11.  For more information on JDK 11 support in Liferay CE/DXP please see the JDK Roadmap post.  Also check the known issues section below for an issue and workaround related to JDK 11.  
  • Clustering Returns - GA3 now includes clustering support out of the box once again.  For more information see this announcement for clustering.  Also see the official documentation for updated info on configuring clustering.  
  • Liferay Hypermedia REST APIs - We recently announced a beta release for Liferay Hypermedia REST APIs.  In addition to Liferay DXP, Liferay Portal 7.1 CE GA3 now supports the beta release.  For more information see the official Liferay Hypermedia REST APIs site.
Documentation

Official documentation can be found on the Liferay Developer Network. For information on upgrading, see the Upgrade Guide.

Bug Reporting

If you believe you have encountered a bug in the new release, you can report your issue on issues.liferay.com, selecting the "7.1.2 CE GA3" release as the value for the "Affects Version/s" field.

Known Issues
  • LPS-86955: Use an alternate browser instead of IE 11.  
  • LPS-88877: Remove license file from lpkg file and deploy application manually. A more permanent fix is coming to GA4.  
  • LPS-87421: Set properties included in ticket.
Getting Support

Support is provided by our awesome community. Please visit our community website for more details on how you can receive support. Liferay and its worldwide partner network also provide services, support, training, and consulting around its flagship enterprise offering, Liferay DXP. Also note that customers on existing releases such as 6.2 and 7.0 continue to be professionally supported, and the documentation, source, and other ancillary data about these releases will remain in place.

Kudos

Thanks goes out to our engineering and QA teams, who spent countless hours developing, testing, translating, and writing documentation to ensure a stable release that meets and hopefully exceeds our expectations!

Jamie Sammons 2019-01-21T19:40:00Z
Categories: CMS, ECM

Session Storage is Evil

Thu, 01/17/2019 - 13:19
Introduction

For folks that know me, they know one of my favorite soapbox rants is on HTTP and/or Portlet session storage.

Recently my friend Kyle Stiemann wrote a blog about Session Storage in Liferay, and he reached out to me to proof the blog post before it was published. While it was really well written and informative, I must admit I didn't want it published. Providing a convenient place showing how to use sessions, even with all of the warnings, seemed to me almost like promoting session storage too much. I am very much against session storage usage, but could only find one other reference that shared my opinion: https://dzone.com/articles/rules-thumb-dont-use-session

Since I hadn't really made my rant public before, and since I've been getting questions lately about session usage, I thought it was about time that I made my case publicly so it's available for all to see (or trash, as the case may be).

Session Storage is Evil

There, I said it. Session storage is evil. I'll go even farther: if a developer uses session storage of any kind, it demonstrates that the developer either doesn't care about the runtime impact or doesn't know about the runtime impact, and I'm not really sure which is worse.

Session Storage as the Sirens' Song

For developers, session storage is akin to a sirens' song. Sailors, hearing the sirens' song, would steer their ships upon the rocks, leading to destruction and death. Session storage is the same for developers. It is so darn easy to use. It has been part of the javax.servlet.http API since the very first release. There are tons of examples of session storage online for developers to reference. It is presented to new Java developers who are learning to build servlets. And compared to other temporary unstructured data storage, it is so simple. So it definitely has its allure. So how can it be evil?

How Session Storage is Evil

Session storage is not evil from a developer's perspective, but it is absolutely evil from deployment and runtime perspectives. Here's a short list of why session storage is evil:

1.
Session Storage Consumes Server Resources. Although this may sound obvious, it may surprise developers if load and capacity were not considered during development. As a developer, it might seem trivial to store a list of objects retrieved for the user into their session. The code is easy, and unit and system testing will not reveal any obvious defects. Problems may only surface during load testing.

Let's consider a calendar implementation. Imagine a system where, when a user logs in, their list of upcoming calendar events is retrieved and stored in their session. The idea was that this would offer a significant performance boost by not retrieving the data from the database every time the user refreshes the page. Such a system would be easy to code and easy to unit and system test. After all, we're going to do our testing with some relatively small number of events, so the session storage aspect will work out fine and performance will be great.

Now consider that an event is, say, on average 250 bytes. Then say, on average, a user has 20 events on their calendar at any given time. Rough math then gives us about 5 KB for each user. Now, given that session storage is in-memory only, further rough math says that this system will accommodate about 200 users per MB of memory. These kinds of numbers define what our capacity is going to be for mostly concurrent users at any given time. If the averages increase, e.g. you add a description string to the event and events grow to an average of 500 bytes each, this will cut your capacity in half. And it is "mostly concurrent" because session storage is only reclaimed if a) the user logs out or b) the user's session times out. You cannot expect that every user will always log out; in fact, you should plan on the worst-case scenario that users never log out and their 5 KB of calendar events will remain in memory until the session expiration timeout.
Factoring these things together, you can start to see how session data can actually start to consume system resources and can negatively affect node capacity.

2. Session Storage is Implemented Using Object Serialization. All objects that will be stored in the session must implement java.io.Serializable. On the surface, that might not seem like a big deal, and maybe for some use cases it isn't. If you control the classes that you will be pushing to the session, serializability is easy to include. The problem comes when you do not control the classes that you want to push to the session. Maybe these classes come from another team in your organization, or maybe they come from a third-party library or Liferay itself. When you don't have control over the classes, you may not be able to make them serializable, so they might not be compatible with session storage.

And honestly, developers are really, really bad about implementing serialization. I guarantee that few developers actually follow the best practices for using Serializable. If you think you're the exception, check out http://www.javapractices.com/topic/TopicAction.do?Id=45 or https://www.javacodegeeks.com/2010/07/java-best-practices-high-performance.html or https://howtodoinjava.com/java/serialization/a-mini-guide-for-implementing-serializable-interface-in-java/ for good Serialization usage. Compare that to your or Liferay's code to see if you can find an instance where Serialization is implemented according to best practices.

Did you know that serialized data is not really secure? Serialized objects capture the data the instances contained, but by default it is not going to be encrypted at rest. Look for the .ser files from Tomcat after session storage to determine if your data is exposed. Serialized data has other issues as well. OWASP defines a vulnerability inherent in deserializing untrusted data: https://www.owasp.org/index.php/Deserialization_of_untrusted_data.
If you are set up to persist session data across restarts, the reality is that this data must be considered untrusted because there is no guarantee that the serialized data has not been tampered with. Finally, as serialization is seen as the source of many Java security defects, there are reports that Oracle plans on dropping support for serialization: https://www.bleepingcomputer.com/news/security/oracle-plans-to-drop-java-serialization-support-the-source-of-most-security-bugs/. When and if this happens, it will likely force changes in how session storage is handled.

3. Session Data is Bound to One Node. When using session data, it is normally stored only on a single node. If you only have a single production application server, then there is no problem because the stored data is where it needs to be. But if you have a cluster, data stored in a session on one node is not normally available across the cluster. Without some kind of intervention in the deployment, data stored in a session for a user is only available to the user if their request ends up back on the same node. If they end up on a different node, the session data is not available. Switching nodes can happen automatically if the node the user stored the data on either crashed or was shut down.

In an OOTB session configuration in Tomcat, sessions can be used to store data, but this data is not persisted across restarts. So any data stored in the session is lost if the node is either stopped or crashed; even if you can restart a failed node, the session data is lost. You can configure Tomcat to persist session data across restarts, but even in this configuration, when the node is down the session data is not available since it is bound to that specific node. Plus you still have all of the issues with serialized data from #2 above to deal with. The first wave of Java shopping cart implementations used session storage for all of the cart items.
It was super easy to use as a temporary, disposable store of data not worthy of longer-term persistence. But customers had a hard time using these carts because they would sometimes see their cart items disappear. This would happen if the customer got switched to a new instance because the node crashed, the load balancer redistributed traffic, or the node was taken down for maintenance.

4. Session Replication is Costly and Consumes Resources. One solution for the loss of the node with all of the session data was to introduce session replication. Session replication copies data stored in the session to all of the other nodes in the cluster. Since this is not an OOTB solution for most app servers, it requires additional servers and software. There is no standard for session replication, so each offering is custom and leads to lock-in. Oftentimes at the start of a project these additional costs are not planned for; they usually crop up at the end of the project when the operations team is trying to figure out how to fix an implementation that was broken due to use of session storage, so you get an end-of-project surprise implementation cost.

Once the replication stuff is in place, there is still the ongoing overhead to deal with. Every session data update will result in additional network overhead (to share the update). In some cases, replication is done by copying to all nodes (in a mesh), which has a large amount of overhead; in other cases, session storage is centralized to minimize network overhead. In either case, operationally you are adding another possible point of failure to the infrastructure. What happens if your session data container crashes? How will your application recover? Is it even operable at that point? What are the disaster recovery concerns? How do you fail over gracefully? In a distributed cluster, how do you handle latency or network issues between the regions?
How do you monitor availability of your session replication infrastructure? How do you debug issues arising from session replication? Will existing session data be available to a new node coming online, or is there some amount of syncing that needs to be done? As developers, we often don't have to think about any of these kinds of issues. But I guarantee that these issues exist and must be planned for from an operations perspective. Remember the math example from item #1 above? For session replication, all of the math gets multiplied by the number of nodes deployed in the cluster. The replication solution needs to be able to store as much session data as is generated by X nodes, each under peak load; anything less could potentially lead to data loss. And the "mostly concurrent" aspect caused by users not manually logging out is an additional factor that affects the sizing of your replication solution.

5. Sticky Sessions Unbalance Resource Consumption. Another option often used with session storage is sticky sessions. With sticky sessions, the load balancer is configured to send traffic originating from the same host to the same target node. This ensures that a user will have access to the data stored in their session. It is the lightest-weight solution for stored session data since it doesn't require additional hardware or software, but it does have its own serious drawbacks. If the node crashes or is taken out, the user loses access to the session data and the UX will not be good. The load balancer in this situation will switch the user to another node, but the data in the session is still not available. However, while the node is up, all traffic originating from the same origin will always go to the same node. In an autoscaling scenario, sticky sessions work against being able to distribute traffic amongst the nodes.
If you have a two-node cluster and both nodes are saturated, when you spin up a third node it will only receive requests from new origins; the two saturated nodes will remain saturated because the sticky session logic binds origins to nodes.

Ideal Solution

So what's the ideal solution? Avoid session storage altogether. Seriously, I mean that. The benefits are tremendous:

  • No resource consumption for session data.
  • No additional network chatter to broadcast session data for replication.
  • Load balancing able to shift load across the cluster based on capacity.
  • No additional costs in sizing the nodes or session replication solutions.
  • No autoscaling issues.
  • No security concerns.
  • No lingering data waiting for session timeouts.
  • No developer impact to correctly implement Serializable interface for internal and external code.
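For the "marshal it to JSON and put it in the database" alternative described below, the shape of the code is roughly as follows. This is a minimal sketch: the "database" is faked with an in-memory map and the JSON is built by hand purely for illustration; in a real portlet you would use your persistence layer and a proper JSON library.

```java
import java.util.*;

public class TempDataStore {

    // Stand-in for a database table keyed by user id (illustrative only).
    private final Map<Long, String> store = new HashMap<>();

    /** Persist the user's temporary wizard state as an unstructured JSON blob. */
    public void save(long userId, Map<String, String> fields) {
        StringBuilder json = new StringBuilder("{");
        for (Iterator<Map.Entry<String, String>> it = fields.entrySet().iterator(); it.hasNext();) {
            Map.Entry<String, String> e = it.next();
            json.append('"').append(e.getKey()).append("\":\"").append(e.getValue()).append('"');
            if (it.hasNext()) json.append(',');
        }
        store.put(userId, json.append('}').toString());
    }

    /** Any cluster node can load the state; nothing lives in an HTTP session. */
    public String load(long userId) {
        return store.get(userId);
    }

    public static void main(String[] args) {
        TempDataStore dao = new TempDataStore();
        Map<String, String> wizardState = new LinkedHashMap<>();
        wizardState.put("step", "2");
        wizardState.put("shippingCity", "Antwerp");
        dao.save(42L, wizardState);
        System.out.println(dao.load(42L));  // {"step":"2","shippingCity":"Antwerp"}
    }
}
```

Because the blob lives in shared storage rather than in any one node's memory, every node in the cluster can serve the next request, which is exactly the property session storage lacks.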
Most often, the pushback I get on this is from a developer who needs to stash temporary, unstructured data for a short period of time. If session storage is taken off the table, what is left? The database, of course. If your data is a Java object graph, you can marshal it into JSON or XML and store the string in the database if it must be completely unstructured. Or you could use a NoSQL solution like MongoDB or even Elasticsearch to hold the data. For wizard-like forms, you can carry form field values forward in hidden fields, allowing the temporary data storage to occur in the client's browser instead of on your application server. There are just so many solid, cluster-friendly ways to carry this data around. All it takes is a good architecture and design. And the general desire to avoid the evil that comes with session storage...

If you are advocating using session storage, consider the items above. If a coworker is using session storage, call them out on it as soon as possible. If a potential candidate proposes using sessions to store data, question whether the candidate understands the runtime issues that plague session storage. If a contractor wants to use session storage, get a new contractor. Follow the example of Odysseus' crew: fill your ears with beeswax and avoid the sirens' song sung by session storage...

David H Nebinger 2019-01-17T18:19:00Z
Categories: CMS, ECM

Liferay ThemeDisplay in Angular apps

Wed, 01/16/2019 - 15:48

For a recent project, we developed an Angular application that uses Liferay DXP as the back end. In this Angular application we regularly make requests into the Portal back end for User information, Journal Articles, ... Mostly this works great as we have defined several custom REST end points in DXP from which we can make use of the Liferay services and other nifty Liferay stuff.

However at one point we encountered an issue where one of our end points did not return the expected Journal Articles. In fact it even threw an exception underneath.

After some short investigation we determined that the Audience Targeting rules were not being applied properly to the Journal Articles. The actual cause is that several Audience Targeting rules make use of the Liferay ThemeDisplay object. You probably know that this is one of the standard Liferay objects available in each portlet request as an attribute under the logical name themeDisplay. And that it can be retrieved by using following code:

request.getAttribute(WebKeys.THEME_DISPLAY);

What seems to be the officer, problem?

As I have mentioned, we make use of REST end points and thus not of portlet requests. These REST requests are made from our Angular front end to custom end points in a Liferay environment, so-called Controllers. Below is an example to retrieve some news articles. The first block of code is taken from our Angular service that makes the request. The second block of code is from the Java Controller which responds to the request and returns news articles.

public findNewsArticles(limit?: number): Observable<Response<Array<NewsArticle>>> {
    const params = this.createHttpParams(limit);
    const url = this.articlesUrl();
    return this.http.get<Response<Array<NewsArticle>>>(url, {params: params});
}

@GET
@Path("/group/{groupId}/articles")
@Produces(MediaType.APPLICATION_JSON)
public Response getNewsArticles(final @Context HttpServletRequest request,
        final @Context HttpServletResponse response,
        @PathParam("groupId") final long groupId,
        @QueryParam("limit") final long limit) throws Exception {
    long aLimit = limit == 0 ? Long.MAX_VALUE : limit;
    User user = portal.getUser(request);
    List<UserSegment> userSegments = userSegmentService.getUserSegments(groupId, request, response);
    return newsService.findByUser(user, groupId, aLimit, userSegments)
        .map(this::successResponse)
        .orElseGet(this::notFoundResponse);
}

The problem which we encountered is that these REST requests don't carry a ThemeDisplay object. Because of this the list of userSegments in above example is not correctly constructed as the rules can't properly determine whether or not a user belongs to a certain segment.

So, one way or another, we needed to be able to pass the ThemeDisplay object from the front end to the back end through the custom REST requests. We contacted Liferay support using a LESA ticket to get this straightened out. With their help it became clear we needed to add the ThemeDisplay object manually to the REST request. Following the standard Liferay JSON WS documentation, it should be possible as follows:

Liferay.Service(
    '/api/jsonws/journal.journalarticle/get-article-content',
    {
        groupId: 20126,
        articleId: '40809',
        languageId: 'en_US',
        themeDisplay: {
            locale: "en_US"
        }
    },
    function(obj) {
        console.log(obj);
    }
);

In code we trust

As is mostly the case when in doubt with how to achieve something in Liferay: turn yourself towards the source code!
In the default Liferay JSON WS, the ThemeDisplay object is actually passed as a form parameter from the front end to the back end. This also requires that the request actually be a POST request. In the back end there are subsequent servlet filters that operate on these requests. One of these filters transforms the themeDisplay JSON string into a Java object and adds it as a request attribute.
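To make that transformation step concrete outside of a running portal, here is a minimal stand-alone sketch: a toy value class and a naive single-field JSON extraction, purely illustrative. The real filter shown later in this post uses Liferay's JSONFactoryUtil and the actual ThemeDisplay class instead.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ThemeDisplayHeaderDemo {

    /** Toy stand-in for the real ThemeDisplay (illustrative only). */
    static class ToyThemeDisplay {
        String languageId;
    }

    /** Naive extraction of the languageId field from the JSON header value. */
    static ToyThemeDisplay fromJson(String json) {
        ToyThemeDisplay td = new ToyThemeDisplay();
        Matcher m = Pattern.compile("\"languageId\"\\s*:\\s*\"([^\"]*)\"").matcher(json);
        if (m.find()) {
            td.languageId = m.group(1);
        }
        return td;
    }

    public static void main(String[] args) {
        ToyThemeDisplay td = fromJson("{\"languageId\": \"en_US\"}");
        System.out.println(td.languageId);  // en_US
    }
}
```

The filter's job is then just to attach the resulting object to the request under the attribute name the downstream code expects.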

So it became clear to us that we needed to make adjustments on both ends.

Front end

Of course there are several ways to pass the ThemeDisplay object from the front end. To avoid too many changes to our existing code base, and because it is fast and easy, we chose to pass the ThemeDisplay object as a header in the REST requests. This is very convenient because we can now add a plain JavaScript object, just as in the Liferay JSON WS examples. So we only need to construct a JSON string with all the necessary fields and their values.

Here be dragons
You can make use of the Liferay.ThemeDisplay object which Liferay provides, if you are still in a Liferay environment. However, you cannot use this object itself as the value: it consists of functions, and if those are serialized to a string, they just become null. But you can make use of the Liferay.ThemeDisplay object to populate your own object. You will see this in the following examples.

To limit the duplication of code we also made use of an Interceptor. This is an Angular component that will inspect every HTTP request and potentially transform it. As the ThemeDisplay object isn't required for each and every request, but solely for those where it's actually necessary for our implementation, we added a small URL validation. So in our Angular app we added the themedisplay.interceptor.ts component below:

@Injectable()
export class ThemeDisplayInterceptor implements HttpInterceptor {

    intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
        if (this.matchingUrl(req)) {
            const modified = req.clone({setHeaders: {
                'themeDisplay': `{"languageId": "` + Liferay.ThemeDisplay.getLanguageId() + `"}`
            }});
            return next.handle(modified);
        }
        return next.handle(req);
    }

    private matchingUrl(req: HttpRequest<any>) {
        return req.method == 'GET' && req.url.match('\/o\/acanews' +
            '|\/o\/acatasks' +
            '|\/o\/acafaqs' +
            '|\/o\/acabanners'
        );
    }
}

Back end

When the ThemeDisplay is passed as a string as in the above example, it enters the back end as a header on the HttpServletRequest. So here are two issues to solve:

  1. the ThemeDisplay object is not yet in the request attribute location that Liferay expects
  2. the object is of type String
Thankfully this can be resolved without much effort: get the String header from the request, convert it to a ThemeDisplay object, and add it back to the request under the correct attribute. The conversion seems like the most complex part, but Liferay has a lot of useful utilities, and in this case the JSONFactoryUtil does the job. Using this utility you can easily transform a JSON String into an object of a given type. The method below performs all these actions at once:

private void convertThemeDisplay(HttpServletRequest request) {
    String themeDisplay = request.getHeader(THEME_DISPLAY_PARAM_NAME);
    if (!isEmpty(themeDisplay)) {
        ThemeDisplay td = JSONFactoryUtil.looseDeserialize(themeDisplay, ThemeDisplay.class);
        request.setAttribute(WebKeys.THEME_DISPLAY, td);
    }
}

The isEmpty check in the code above is necessary to prevent a NullPointerException in case no themeDisplay header is present. We used this method in a custom ServletFilter so it can be executed on all requests without changing any of the existing controllers.
By providing url-patterns in the @Component definition, it is also possible to only touch those requests deemed necessary:

@Component(
    immediate = true,
    property = {
        "servlet-context-name=",
        "servlet-filter-name=ThemeDisplay Filter",
        "url-pattern=/o/acanews/*",
        "url-pattern=/o/acatasks/*",
        "url-pattern=/o/acafaqs/*",
        "url-pattern=/o/acabanners/*"
    },
    service = Filter.class
)
public class ThemeDisplayFilter extends BaseFilter {

    private static final Log LOGGER = LogFactoryUtil.getLog(ThemeDisplayFilter.class);
    private static final String THEME_DISPLAY_PARAM_NAME = "themeDisplay";

    @Override
    protected void processFilter(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws Exception {
        convertThemeDisplay(request);
        super.processFilter(request, response, filterChain);
    }

    private void convertThemeDisplay(HttpServletRequest request) {
        String themeDisplay = request.getHeader(THEME_DISPLAY_PARAM_NAME);
        if (!isEmpty(themeDisplay)) {
            ThemeDisplay td = JSONFactoryUtil.looseDeserialize(themeDisplay, ThemeDisplay.class);
            request.setAttribute(WebKeys.THEME_DISPLAY, td);
        }
    }

    @Override
    protected Log getLog() {
        return LOGGER;
    }
}

Well... There it is

By performing the necessary front and back end changes, we can now make proper use of Audience Targeting rules in the Controllers for our Angular application. In the front end we intercept the desired REST requests and add a header with the ThemeDisplay (JSON) String. It is sufficient to add only the fields needed by the Audience Targeting rules to this object. In the back end we retrieve this String from the headers, deserialize it into an actual ThemeDisplay object and add it to the request attributes under the WebKeys.THEME_DISPLAY name. All this occurs in a ServletFilter that only responds to the desired url patterns. At the moment it is not possible to simply pass the entire Liferay.ThemeDisplay object that Liferay provides OOTB.
This object does not contain any fields with values but is constructed out of functions, so we need to construct the ThemeDisplay object ourselves. By using an Angular Interceptor and a Liferay Servlet Filter, any request can be updated transparently without touching too much code.

Koen Olaerts 2019-01-16T20:48:00Z
Categories: CMS, ECM

New on Liferay University

Tue, 01/15/2019 - 18:15

It's time for a new announcement for new content on Liferay University. A new year brings new content. Happy new year!

Liferay Devops

Do you know how to set up a cluster? How to back up (or even better: restore) your installation? In Liferay Devops we cover this and more. Based on Docker, this course guides you through many of the day-to-day operations involved in installing and maintaining a successful Liferay environment.

You'll find this course on Liferay University, and if you're a Passport customer, your investment just got more valuable: of course it's added to your curriculum on Passport. Topics in this course include:

  • All infrastructure managed in Docker
  • Installation on Tomcat and Wildfly
  • Options for Configuration of Liferay
  • Configuring Remote Staging
  • Clustering for Load and Fault Tolerance
  • Liferay in the Cloud
  • Continuous Integration / Delivery / Deployment
  • Interfacing with LDAP
  • Monitoring, Hardening and Securing
  • Installation of Fixpacks
  • Backing up Liferay
  • Aspects for Upgrades
  • and many more
The instructor on this course is yours truly, and I'm looking forward to seeing you sign up for the course. Get your Docker running, then give it something to do.

Manage Liferay in the Cloud with DXP Cloud

You may have heard about Liferay's new offer: a managed platform for your DXP installations. And if you've wondered how to best use it, you can now tap into the DXP Cloud team's knowledge, using the new free lesson on Liferay University. Of course, this is also included in your Liferay University Passport. Head over to University or Passport and start the lesson. 42 minutes of video are waiting for you, covering these topics:
  • Introducing Liferay DXP Cloud
  • Deploying for the first time
  • Collaborating with Team Members
  • Debugging Problems and Monitoring
  • Connecting with Internal Systems
  • Backing up and Restoring Data
  • Receiving Alerts and Auto Scaling
Your instructor is Zeno Rocha, the Chief Product Officer for DXP Cloud - you can't learn the insides from a better source.

Learn as much as you can

The offer that you can't refuse: Liferay University Passport, our flat rate for each and every course on Liferay University, was available for an introductory price at almost 30% discount that should have been over by now. It includes personal access to all material on University for a full year - learn as much as you can. The offer was scheduled to expire at the end of the year - but while I'm writing this, it's still available - so sign up quickly, or regret it.

Prefer a live trainer in the room?

Of course, if you prefer to have a live trainer in the room: the regular trainings are still available, and have been updated to contain all of the courses that you find on Liferay University and Passport. And this way (with a trainer, on- or offline) you can book courses for all of the previous versions of Liferay as well. And, of course, the fine documentation is always available and continuously updated.

(Photo: CC by 2.0 Hamza Butt)

Olaf Kock 2019-01-15T23:15:00Z
Categories: CMS, ECM

How can I share session data across portlets in Liferay?

Thu, 01/10/2019 - 20:00

Before I answer this question, consider:

What are you actually trying to accomplish? Are you sure that you need to share session data to fulfill your goal? Can you use another (better) method to share data between several portlets across several requests?

David Nebinger, Liferay expert and prolific community member, has said that "Session Storage is Evil." Perhaps you could dismiss his statement as hyperbole, but considering all his supporting evidence (and the other experts who have echoed his concerns),1 I think that would be a mistake. Storing (and sharing) data in the session has many drawbacks. For example, sessions must be replicated in a cluster or risk data loss and a disruptive user experience; data in the session must be accessed and modified in a thread-safe way; and any session data increases your application’s memory footprint and reduces scalability. More importantly, the HttpSession (which the PortletSession is based on) is not designed for sharing data between applications (see Servlet Spec 3.1 Section 7.3: Session Scope). Finally, sharing data via the session creates tight, opaque coupling between applications. So before you store and share any data via the PortletSession, consider the following alternatives:

  1. Use Public Render Parameters to share string values between portlets. Ultimately, Public Render Parameters (PRPs) are stored in the session (in Liferay’s implementation at least), so they still have some of the same drawbacks as session storage. However, PRPs are limited to strings, so their impact on memory is less extreme than that of other objects that might be stored in the session. PRPs are also more clearly defined and allow looser coupling between portlets than shared session data, and they provide bookmarkability with idempotent URLs. Use PRPs to share data such as database keys and similar strings that allow other portlets to determine what or how certain data should be displayed. Unlike Portlet Events, PRPs should not be used to mutate model data in other portlets.
  2. Use Portlet Events to publish events which can mutate model data that is shared by multiple portlets.
Events don’t use session storage, so they will usually have less of an effect on your application’s memory footprint than session data. Although events might require more complex code than PortletSession.get/setAttribute(), events (like PRPs) enable clearly-defined, loose coupling between portlets via the publish/subscribe design pattern. Use Portlet Events when an action in one portlet may be interesting to, and may even provoke changes in the data of, another portlet.
  3. Use Service Builder to cache data and persist data to the database. Persisting data to the database with Service Builder is an extreme option, with potential drawbacks for scalability and increased complexity. Creating a new Service with Service Builder potentially adds two new JARs to your project. However, it will ensure that the data is available to all portlets via the service APIs. You can also enable caches to improve scalability. Perhaps the data you are sharing in session storage is not really temporary, or it should be preserved even if a server goes down. In that case, you should use Service Builder.

How to Share Session Data Between Portlets

Hopefully 99.9% of people have decided to use PRPs, Events, or Service Builder to share data between portlets and have stopped reading at this point. But for the sake of the 0.1% who legitimately need to use session sharing, I’ve provided a list of ways that you can do that. However, I must again urge caution when relying on session sharing and using session storage in general. Please make sure that you are following best practices so that the “easy-to-use” session doesn’t become a source of impossible-to-debug problems. To that end, I encourage you to use the earlier entries in this list and limit the state that is shared between the portlets as much as possible.2

  1. Use PortletSession.APPLICATION_SCOPE to share one or more session attributes between portlets in the same application/WAR. For example:

     // Portlet A
     portletRequest.getPortletSession(true)
         .setAttribute(CONSTANTS.ATTR_NAME, "value", PortletSession.APPLICATION_SCOPE);

     // Portlet B (in the same WAR)
     String attrValue = (String) portletRequest.getPortletSession(true)
         .getAttribute(CONSTANTS.ATTR_NAME, PortletSession.APPLICATION_SCOPE);

     Pros:
    • Portlet standard method of accessing/sharing session scoped data between portlets.
    • Only exposes the necessary attribute(s) to other portlets (instead of exposing the entire session).
    • Only exposes the data to portlets in the same application/WAR.
    Cons:
    • Cannot share data between portlets in different WARs.
  2. Use Liferay session.shared.attributes prefixes (such as LIFERAY_SHARED_) to share one or more session attributes between portlets in different applications/WARs. Liferay exposes certain session attributes to all portlets based on certain prefix values. Although these prefixes are configurable via portal-ext.properties, I recommend using one of the default prefixes: LIFERAY_SHARED_. For example:

     // Portlet A
     portletRequest.getPortletSession(true)
         .setAttribute("LIFERAY_SHARED_" + CONSTANTS.ATTR_NAME, "value",
             PortletSession.APPLICATION_SCOPE);

     // Portlet B (in a different WAR)
     String attrValue = (String) portletRequest.getPortletSession(true)
         .getAttribute("LIFERAY_SHARED_" + CONSTANTS.ATTR_NAME,
             PortletSession.APPLICATION_SCOPE);

     Pros:
    • Only exposes the necessary attribute(s) to other portlets (instead of exposing the entire session).
    Cons:
    • Exposes session attribute(s) to all portlets.
    • Tight coupling without indicating which other portlets might be utilizing this data.
    • Non-standard method of sharing session data.
  3. Use <private-session-attributes>false</private-session-attributes> (in your liferay-portlet.xml) to share all session data between portlets in different applications/WARs. For example:

     Portlet A: liferay-portlet.xml:

     <liferay-portlet-app>
         <portlet>
             <portlet-name>portlet.a</portlet-name>
             <!-- your config here -->
             <private-session-attributes>false</private-session-attributes>
             <!-- ... -->

     // Portlet A
     portletRequest.getPortletSession(true)
         .setAttribute(CONSTANTS.ATTR_NAME, "value", PortletSession.APPLICATION_SCOPE);

     Portlet B: liferay-portlet.xml:

     <liferay-portlet-app>
         <portlet>
             <portlet-name>portlet.b</portlet-name>
             <!-- your config here -->
             <private-session-attributes>false</private-session-attributes>
             <!-- ... -->

     // Portlet B (in a different WAR)
     String attrValue = (String) portletRequest.getPortletSession(true)
         .getAttribute(CONSTANTS.ATTR_NAME, PortletSession.APPLICATION_SCOPE);

     Pros:
    • Exposes all PortletSession.APPLICATION_SCOPE attributes without the need for special prefixes.
    Cons:
    • Exposes all PortletSession.APPLICATION_SCOPE attributes to all portlets with <private-session-attributes>false</private-session-attributes>.
    • Tight coupling without indicating which other portlets might be utilizing this data.
    • Non-standard method of sharing session data.
    • May cause memory leaks or other failures in standard frameworks, such as the JSF Portlet Bridge.
  4. Access the HttpSession by unwrapping the HttpServletRequest from the PortletRequest. Don’t do this. It’s not guaranteed to work in Liferay 7.0+ since you may not be able to access the app server’s HttpServletRequest and you can accomplish essentially the same goal with #3.
  1. For more details see this forum post by Olaf Kock (top Liferay answerer on StackOverflow) and Mark Needham’s Rules of Thumb: Don’t Use the Session blog.
  2. The more data is shared between portlets, the more it starts to look like (evil) global state to me.
Kyle Joseph Stiemann 2019-01-11T01:00:00Z
Categories: CMS, ECM

Let’s move your pages

Sun, 12/16/2018 - 17:00


Since Liferay 7.1, page management has been completely redesigned. The navigation tree presented to end users is now controlled by the new Navigation Menus rather than by the position of pages in the pages administration screen (which now uses the Miller columns visualization system).

Unfortunately, at the time these lines are written (i.e. with Liferay 7.1 GA2), the position of a page cannot be updated: you can’t change the parent nor the order across siblings. This is not a business-critical issue, because page positions are not used to display menus to end users... But it is really uncomfortable for website administrators, because it’s not possible to organize pages for a decent administration experience.

So, I have developed a small free app providing the capability to change page positions, simply by drag and drop. Once the app is installed, you can change a page's position by dropping the page between other pages or onto a page title (in which case the dropped page becomes the first child of the targeted page).


As a cherry on the cake, you can also change the parent of a page by dragging it on a breadcrumb item.

This open-source app is available on the Liferay Marketplace: https://web.liferay.com/fr/marketplace/-/mp/application/116738420. I hope this will help you until this feature is integrated natively into the Liferay product.

Sébastien Le Marchand
Freelance Technical Consultant in Paris

Sébastien Le Marchand 2018-12-16T22:00:00Z

Categories: CMS, ECM

Liferay Forms and Data Providers

Fri, 12/14/2018 - 05:24

While trying to give your users the best experience ever, you always try to keep things simple. In the past I have filled in many forms, many times with the same data, and many times at the same site. Why?

As a simple example I took a free 'zipcode to address' service in the Netherlands and used it to show how easily this can be done. And this is what it looks like.

To do this I needed to call the service in a different way, since the service requires an "X-Api-Key" in the header and that's not supported in the data provider. So, with a simple JAX-RS service, I wrote a wrapper to simplify the call so it could be used with the data provider. Once I had this working, it took me just a few minutes to create the form and add the rules to initiate the lookup. The nice thing is that I can also use this wrapper in my Liferay Commerce environment to easily add a delivery address.

This is such a powerful piece that I started to think further. The service returns information like street name, province, city, geo-location, when the address was built and the type of address. With all this information you can start building a profile for your users. Every X times they visit your website, you can ask them to provide a small piece of the puzzle. And by collecting all this information you can further improve the UX, if it is used in the right way. E.g. you organize an event: ask people to register, and based on geo-location you know what would be the best possible location to organize this event closest to your visitors.

With Liferay Forms it's really easy to start collecting all this information and since it's all stored in Elasticsearch you can use Kibana to create dashboards, drill down and find patterns. Don't forget to think about GDPR compliance and how you protect this data. In case you need to increase security you should consider Liferay Enterprise Search to lock down the communication between Liferay and Elasticsearch.

Jan Verweij 2018-12-14T10:24:00Z
Categories: CMS, ECM

Liferay REST APIs Beta release

Thu, 12/13/2018 - 11:44

I’m happy to announce the release of the first beta for the new set of Liferay REST APIs. But before going into details, for those of you who might not be aware of this project, I’d like to give you some context.

Our vision

Over the last year and a half, we have been working on a new set of REST APIs for Liferay, following industry best practices. The goal of these APIs is to allow developers to access and manage all the content stored in Liferay in an easy way, so you can focus on building great experiences instead of struggling with how to get that content.

Our long-term goal is to satisfy the needs of the different Liferay users, covering all the important use cases by offering the features needed by each one of them. Before fulfilling the whole vision, and based on the feedback received from several customers that we interviewed, we decided to focus first on helping you deliver content to your end users.

What you will find in this beta

Now that we are all on the same page, let’s talk about what you will find in this first version of the APIs. The scenario that we had in mind while building them was to allow developers to build their own front end to expose the content created and stored in Liferay. With that objective, we prioritized the features to implement in order to support, among others, the following use cases:

  • Access published structured content. By doing so, you’ll be able to create for example an SPA to show the latest news from your company.
    • To help you find the right content, we’ve implemented several filtering and sorting options, such as: filter and sort by title, publishing date, tags, etc.
  • Access the documents and media repositories. Enrich your experiences by having at your disposal the whole media library that you are already benefiting from.
  • Access the blog entries that are already published. Access the content and expose it with the best appearance your developers can think of.

Apart from these three main use cases, we are making it possible to access and perform actions on other types of information. To check the whole set of resources and actions available in this beta, see the first draft of the documentation and the Open API profile here: https://headlessapis.wedeploy.io/

How can I get access to the beta

We hope that by now you are eager to give these APIs a try: test them, check if they could fit in your projects, and give us feedback to make them even better. To make it more convenient for you, we have prepared a Docker image containing the latest Liferay DXP 7.1 version with the first beta installed. This provides you with a full sandbox to play with, without affecting your current environments. In order to get the Docker image and run the beta, follow the installation instructions:

  • Pre requirements: install Docker to run the beta image
  • Create the config file com.liferay.apio.architect.internal.application.ApioApplication-default.config. Add the following properties to the file:
oauth2.scopechecker.type="none"
auth.verifier.auth.verifier.BasicAuthHeaderAuthVerifier.urls.includes="*"
auth.verifier.auth.verifier.OAuth2RestAuthVerifier.urls.includes="*"
auth.verifier.guest.allowed="true"

Note that the last property (auth.verifier.guest.allowed) lets guests access public content via the APIs. To turn this off, set the property to false. Store this file under “/Users/liferay/Downloads/xyz123/files/osgi/configs”, where “/Users/liferay/Downloads/” is the folder where the docker image will be downloaded.
  • Execute the following command: "docker run -it -p 8080:8080 -v /Users/liferay/Downloads/xyz123:/etc/liferay/mount liferay/portal-snapshot:7.1.x-201812071242-af6321a" 
    • Substitute “/Users/liferay/Downloads/” with the folder where the docker image will be downloaded.
    • Note:  this command will download the docker image for you if it does not find it in your computer.
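Once the container is running, a first call from JavaScript could look roughly like the sketch below. Note that the resource path and the credentials are placeholders for illustration, not the beta’s real values; check the Open API profile linked above for the actual resources. The snippet only builds the request description and leaves the actual call to you:

```javascript
// Build a request description for the beta APIs using Basic auth.
// The path '/o/api/p/structured-contents' is a hypothetical placeholder;
// consult the published Open API profile for the real resource paths.
function buildRequest(base, path, user, password) {
  const credentials = Buffer.from(user + ':' + password).toString('base64');
  return {
    url: base + path,
    headers: {
      'Authorization': 'Basic ' + credentials,
      'Accept': 'application/json'
    }
  };
}

const request = buildRequest(
  'http://localhost:8080', '/o/api/p/structured-contents',
  'test@liferay.com', 'test');

// Hand this to your HTTP client of choice, e.g.:
// fetch(request.url, { headers: request.headers }).then(r => r.json())
console.log(request.url);
```

This keeps the authentication details in one place, which is handy while experimenting against the sandbox container.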
We know that some of you would like to test the APIs in your own environments, so additionally we will release the APIs as a Marketplace app during January. We’ll make a separate announcement once the app is available. Be aware that this is a beta product, so we strongly recommend not using it for production projects, and we expect several breaking changes before the final version is released in order to take your feedback into account.

How to give feedback

Once you have given the APIs a chance (or while testing them), we would like you to tell us about the experience of using them: have you found any errors? Do you miss a feature that you think is crucial? Just access the Liferay portal Jira project here (https://issues.liferay.com/projects/LPS/summary) and create an issue, setting the component to “HeadlessCMS” (the most common issue types will be bug or feature request). We’ll give you an answer as soon as possible.
Pablo Agulla 2018-12-13T16:44:00Z
Categories: CMS, ECM

Exiting the JVM ... !@#$%

Wed, 12/12/2018 - 14:54

Today I experienced a great victory, but the day sure didn't start that way. In the days leading up to working with OSGI, I spent a lot of time reading about it, listening to discussions, etc. I wanted to understand what the big deal was. Among the list of benefits, one stuck out most for me.

"OSGI does the dependency management, at runtime, for you. So if you have Module A that is dependent on Module B, and all of the sudden Module B vanishes, OSGI will make sure your environment remains stable by stopping any other modules (ie. Module A) that are dependent. This way, your user is not able to start interacting with A at the risk of 500 server app errors or worse."

WOW. I mean, I don't often experience this problem, but I like the idea of not having to worry about it! So when I started my journey with Liferay 7, I was keen to see this in action. I follow the rules. I'm one of those people. But from the beginning I would have some odd situations where, if I removed the wrong module, I would get the usual "Import-Package" error message etc., but I would also see this:

[fileinstall-/home/aj/projects/ml/liferay/osgi/modules][org_apache_felix_fileinstall:112] Exiting the JVM

I am on Linux. The process was still running, but the portal was not responsive. So the only way to fix this was to

  1. kill the process
  2. remove the module that caused the error (Module A)
  3. clear the osgi/state folder
  4. start the server
  5. and redeploy in the right order

Ugh. And what a colossal waste of time. Over the past couple of years I would ask people here and there about this error, maybe vent some frustrations, but everyone said the same thing -- "I've never seen that. I don't know what you are talking about". I figured -- pfft, Mac users :)

But today, when it happened again, I said to myself: there must be a reason -- so I started to search. As part of my search I took to Slack. Piotr Paradiuk (whom I had recently met in Amsterdam at DEVCON) was kind enough to rubber duck with me. If you don't know what rubber ducking is -- https://en.wikipedia.org/wiki/Rubber_duck_debugging

Piotr (and David actually, in another chat medium) both had the same reply: "never seen it before". But over the course of try-this-and-do-that, Piotr sent me a link to GitHub (https://github.com/liferay/liferay-portal/blob/65059440dfaf2b8b365a20f99e83e4cdb15478aa/modules/core/portal-equinox-log-bridge/src/main/java/com/liferay/portal/equinox/log/bridge/internal/PortalSynchronousLogListener.java#L106) -- mocking my error message :). But after staring at this file, muttering under my breath, something caught my eye:

if ((throwable != null) && (throwable instanceof BundleException) &&
    (_JENKINS_HOME != null)) {

    String throwableMessage = throwable.getMessage();

    if (throwableMessage.startsWith("Could not resolve module")) {
        log.error(_FORMAT, "Exiting the JVM", context, throwable);

        System.exit(1);
    }
}

More specifically --

private static final String _JENKINS_HOME = System.getenv("JENKINS_HOME");

:| ... I have Jenkins running on my machine. Could that be it? So I stopped my instance and ran --

unset JENKINS_HOME

Restarted my portal and ran through the steps to try to reproduce. This time I get the Import-Package error stuff, but the JVM doesn't exit. :|

So, the moral of the story: if you are like me and you want to have Jenkins on your dev machine to play with and such -- virtualize it or run it in a Docker container. Oh .. and add

unset JENKINS_HOME

to your LIFERAY_HOME/TOMCAT_HOME/bin/setenv.sh

Andrew Jardine 2018-12-12T19:54:00Z

Categories: CMS, ECM

Liferay Portal CE Clustering Returns

Tue, 12/11/2018 - 16:35
Background

Last fall we reintroduced the option to use clustering in Liferay 7.0 by compiling a series of modules manually and including them in your project. We received a lot of feedback that this was a very cumbersome process and didn’t really provide the benefits we intended in bringing back clustering.  

Enable Clustering

Beginning with Liferay Portal 7.1 CE GA3, clustering can now be enabled just like previously in Liferay Portal 6.2 CE.  The official documentation covers the steps needed to enable clustering in GA3.  To enable clustering, set the property:

cluster.link.enabled=true

Successfully enabling clustering will result in the following message in the logs on startup:

-------------------------------------------------------------------
GMS: address=oz-52865, cluster=liferay-channel-control, physical address=192.168.1.10:50643
-------------------------------------------------------------------

Other Steps We Are Taking for the Community

Restoring clustering is one of many steps we are taking to restore faith between Liferay and our community. Other initiatives to improve the developer experience with Liferay include: 

  • More GA releases of Liferay Portal CE to improve stability and quality
     
  • A single-core model for Liferay Commerce, making open source Liferay Commerce installations compatible with our subscription services
     
  • Better support for headless use of Liferay with front-end frameworks
     
  • Improved experience with documentation, including unifying all available documentation and best practices in a single place
     
  • Additional ways to use portions of Liferay Portal’s functionality (such as permissioning or forms) in non-Liferay applications (e.g. those built with SpringBoot)

We’ll be providing more information about each of these initiatives in the weeks to come. 

Conclusion

I want to thank you again for sticking with Liferay all these years as we’ve sorted out our business model (and sometimes made poor decisions along the way). I’m excited for the chance to invest in and grow our community in the coming months. It is what sets us apart from the competition and is truly one of the most rewarding parts of what we do at Liferay. We hope that fully reintroducing clustering is the first step towards achieving this goal, and as always we would love to hear what you think in the comments below.

Bryan H Cheung 2018-12-11T21:35:00Z
Categories: CMS, ECM

(Another) Liferay DEVCON 2018 Experience

Tue, 12/11/2018 - 13:36

What a trip. From the dynamic format of the Unconference, to the Air Guitar competition, the opportunity to speak, and a lens into what's coming in 7.2, it was by far the best event that I have ever been to. I've been to many conferences, but this was my first developer conference. It was so good that I forced myself to find the time to document the experience to share it with all of you.

Day 1 - The Arrival

I arrived the day before to try to cope with the time change. If you arrive early morning and are trying to overcome jet lag with activities, don't start with the river cruise. David N. warned me, but I didn't listen. The warm cabin, the gentle rocking of the boat, the quiet -- all that was missing was a lullaby. David was right, I fell asleep for most of that ride. 

Day 2 - Unconference

I now understand why the Unconference has limited seating and why it sells out so quickly. At first it feels a little bit like group therapy; everyone sitting in a circle, cues around the room for you to share and not be afraid to share. That's how I felt at least. But the dynamic agenda was inspiring. The participants decide the day's topics. Anyone, ANYONE, can stand up, go to the mic, and propose a topic. The topics are then sorted and organized until you end up with an agenda. The discussions are open forums. You can share something cool you have done, ask a question, gripe about something that is bugging you, whatever you want. It's the perfect format for developers; after all, we love to argue and we love to boast.

Best of all, the leads from the various Liferay teams are present, so it's a great opportunity to have your voice heard. For example, I proposed and led a discussion on the topic "Documentation: What is missing?". Cody Hoag, who is the lead for the docs team, was there and shared with us some of the challenges his team faces. He also took notes of the complaints, suggestions and ideas the participants had, to digest and share with his team.

For me the best part of the Unconference was the opportunity to connect not just with other community members but also with the Liferay people from various teams. With only a little over 150 people in attendance it was a nice intimate atmosphere -- perfect for learning and sharing. As if the day and the sessions weren't good enough, the community event that Liferay hosted that evening was a blast. If you have not been to an unconference before, I would highly recommend you attend the next one! I certainly plan to.

Day 3 - DEVCON: Day 1

If first impressions really are the most important thing, then the events team from Liferay nailed it. The venue for this year’s conference was a theatre. The main room (the Red Hall) was amazing and it was one of the first times I found myself at a conference sitting in a seat that I was sure didn't double as a torture rack. I know how silly that sounds, but when you spend most of your day sitting and listening to others talk, a comfortable chair is pretty important! 

The conference kicked off and Liferay didn't waste any time talking about 7.1. I could barely hold back a grin as Brian Chan mentioned BBSes and IRC (items I had included in my own talk that was yet to come). The keynotes and the sessions for the day were awesome -- though I made the same mistake I always make of attending a workshop with no laptop. The worst part is the walk of shame 20 minutes in, as you try to make a quiet (read: impossible) exit from the front of the room to the back. All the sessions I attended were very well done, and I can't wait to see the recordings so that I can watch several sessions I was not able to attend. Sessions aside, there was a very rewarding personal moment for me -- receiving a 2018 Top Contributor award. It wasn't my first award, but who doesn't like to be recognized for the work they do? Actually, the best part of the awards piece was probably the guy to my right who was "picking it up for a friend" ... uh-huh :).

The after party was chock-full of fun, fun that included an Air Guitar competition. What did I learn? Liferay doesn't just hire some exceptional developers; they also have some impressive air guitar skills! The combination of awesome old school rock and all-in, passionate, head-banging air guitar displays makes me think there are several engineers and community members who might have had aspirations of a different career path in their youth. As if a great party and atmosphere weren't enough, I also managed to meet Brian Chan (Liferay's Chief Technical Architect and Founder) -- the guy who started it all. He actually thanked me for the work I have done with Liferay. That was a little shocking, actually; I give back to Liferay and the community because I feel Liferay and the community have done so much for me! As at the community event, I also met some great people from all over Europe and beyond. I even met a Siberian whose name escapes me at the moment, but the shot of vodka he made me do in order to gain passage through the crowd to go home does not. 

Day 4 - DEVCON: Day 2

Day 2, for me, was D-Day. My session was right before lunch. Having rehearsed more times than I could remember, I had lost most of my nervousness. My topic was Journey of a Liferay Developer - The Search for Answers: a community-focused session where I shared how I got involved with Liferay, the challenges I have faced along the way, and how I overcame them. I was a little uneasy, since I wasn't sure anyone would really care to hear my story, but when it was all over, people clapped, so it couldn't have been THAT bad :). All of this was validation that giving back -- the hard work to prep, the courage to stand up and share -- was worth it. It's something I would gladly do again and something I would suggest everyone try at least once in their life. 

After my session I shook hands with Bryan Cheung (CEO of Liferay) and talked with him for a few minutes about some of the points I highlighted in my talk. This is one of the many things I love about Liferay -- the fact that you can connect with people at all levels of the organization. You think Mark Hurd or Larry Ellison would ever take the time to sit down with me? Maybe, but I doubt it. 

Just like Day 1, the sessions were once again amazing. I learned about tools that are coming to improve the developer experience, I learned about integrations between Liferay and other platforms like Hystrix, and I watched some of my friends' presentations discussing their challenges, including how to navigate the rough waters of an upgrade. The day ended with an exciting roadmap session where Jorge highlighted some of the things coming in 7.2; for me, it was really exciting to see some of the potential features coming down the line.

Apart from all this, though, I was amazed at how, in just a couple of days, I had covered material that would otherwise likely have taken me weeks or possibly months to learn. With all this information, I was anxious to get home both to improve my skills and to share my experiences with others (developers and clients) so that we can all get the most out of our Liferay investments.  

Day 5 - Home Again

On your way out, before you check that bag, make sure you have your headphones -- I didn't. The silver lining was that it gave me time to reflect. Nine hours to sit and reflect, to be more precise (that's A LOT of reflection). I found myself dreaming of the next conference I might attend and the topics I might propose, with the hope that Liferay would once again give me the opportunity to share. My goal now? To continue: catching up on some of the content I missed during the conference (https://www.liferay.com/web/events-devcon), connecting with as many people in the community as I can, and making my way to Boston in 2019. Hopefully this year goes well enough that I'll find myself back in Europe again next fall -- the question is, where? And will I see you there? Andrew Jardine 2018-12-11T18:36:00Z

Categories: CMS, ECM

Unconference 2018

Wed, 12/05/2018 - 09:50

The Spanish version of this article can be found here: Unconference 2018. The Unconference took place on November 6 in Pakhuis de Zwijger, the day before Liferay DevCon in Amsterdam. I had read about these sessions, but I had never taken part in one. Spoiler: I loved it. If you have ever taken part in one, you'll know that the Unconference agenda doesn't exist before the event starts. If you haven't, I'm going to describe Liferay Unconference 2018 to explain how it works. First of all, Olaf talked about the different spaces (talk zones, lunch zone, toilets, etc.), lunchtime, organization, and, very importantly, about 4 principles and 1 law. 4 principles:

  • Whoever comes is the right people (every participant wants to be there)
  • Whatever happens is the only thing that could have happened (what matters is what is happening in that place and at that moment)
  • Whenever it starts is the right time (you don't have to wait for anything to start the session)
  • When it's over, it's over -- and when it's not over, it's not over (you make the most of the session)
1 law:
  • Law of Two Feet: if at any time you find yourself in a situation where you are neither learning nor contributing, you can use your two feet to go to a more productive place. This law is important for understanding that every participant is present voluntarily.
I would like to recall a sentence that is important for understanding the Unconference mindset: "If nobody attends your session, you have two options. On the one hand, you can look for another interesting session; on the other hand, you can work on your topic, because when you propose a session you are booking an hour to work on that topic." At that moment we started to brainstorm and build out the day's agenda. We wrote topics down on cards and chose a track for each. Finally, the session topics were coordinated into the available time slots and organized to avoid topic repetition and allow attendees to join the sessions they believed were right for them. It's important to know that only the person who proposed a session can move its card to another slot, so if you want a session moved (because there is another session at the same time), you must talk to him/her.

The agenda was laid out as a grid of hourly time slots (10:00 to 16:00) across tracks A through H. The sessions included:

  • Track A: Staging 2.0 - change list; Liferay in action at the university; Rest/HATEOAS vs. GraphQL; SennaJS/Module loader with other JS libs; Liferay Analytics; What extension point are you missing?
  • Track B: APIO?; Liferay performance with more than 1 million users; Personalization - use cases & more; DDM, Forms & Workflow; Search optimisation: boosting & filtering; DXP Cloud
  • Track C: How do you monitor your Liferay application & plugins; Using multitenancy (more instances in one installation) - a good idea?; mongoDB & Liferay; Audience Targeting to guest user: how to do it?
  • Track D: "Internal classes" - why are some classes in the "internal" package not exported, and how to customize them?; Container + Liferay: how to deploy or upgrade?; Liferay IntelliJ Plugin - Language Server Protocol; Administration experiences; GDPR
  • Track E: Migration experiences: LR 6.2 --> 7.0; Liferay as a headless CMS - best practices; Liferay + TensorFlow; Data integration, ETL/ESB | experiences & methods integrating external systems; Mobile with Liferay; Workspace: tips & tricks | How do you use/extend workspace? Use plugins?
  • Track F: Liferay Commerce; Config & content orchestration - site initializers; Your experience with Liferay + SSO; DXP vs. CE
  • Track G: React?; Media/Video - server user upload/integration; Documentation: What is missing?; Making it easier for business users to build sites (modern site building); Liferay for frontend developers

Here are some of my notes from the sessions:
  • Migration experiences: LR 6.2 --> 7.0           
    • If you want to upgrade the Liferay version and also migrate to an open source database, it's better to migrate the database first.
  • Config & content orchestration - site initializers            
    • There are some implementations; one was created by the Commerce team.
    • The Liferay team is evaluating including it in new versions.
  • Liferay + TensorFlow
    • It is new functionality in 7.2, and it's already available on GitHub.
    • This functionality is implemented for images, but it can be applied to other assets.  
  • How do you monitor your Liferay application & plugins
    • Use JVM tools to inspect threads           
    • Java thread dump analyzer -> http://fastthread.io
  • Documentation: What is missing?
    • Slack vs forum – don't lose information 
  • DXP vs. CE   
    • We talked about the two versions (CE and DXP) and the possibility that paid licenses cover support while the code of both versions stays the same.
  • Workspace: tips & tricks | How do you use/extend workspace? Use plugins?          
    • In the upgrade process to version 7, if you can't migrate all your modules at once, they recommend migrating the services first.
    • The compileOnly dependency instruction is worth trying out.
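The monitoring notes above mention using the JVM to check threads and analyzing thread dumps with fastthread.io. As a minimal illustration of what a thread dump contains, here is a sketch using the standard java.lang.management API; this is a generic example, not anything Liferay-specific:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpSketch {

    public static void main(String[] args) {
        ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();

        // Dump all live threads, including locked monitors and synchronizers
        for (ThreadInfo threadInfo : threadMXBean.dumpAllThreads(true, true)) {
            System.out.print(threadInfo);
        }

        System.out.println("Live threads: " + threadMXBean.getThreadCount());
    }

}
```

In practice you would more likely capture a dump from the running app server with `jstack <pid>` (or `kill -3 <pid>`) and upload the output to an analyzer such as fastthread.io.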
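On the compileOnly note above: in Gradle-based Liferay module builds, the compileOnly configuration is typically used so that dependencies provided by the OSGi runtime are not bundled into the module JAR. A sketch of what that looks like in a hypothetical module's build script (Gradle Kotlin DSL; the artifact coordinates and versions below are illustrative, not prescriptive):

```kotlin
// build.gradle.kts for a hypothetical OSGi module in a Liferay Workspace
dependencies {
    // compileOnly: needed at compile time, but not packaged into the JAR,
    // because the portal's OSGi runtime provides these at deploy time
    // (versions shown here are examples only)
    compileOnly("com.liferay.portal:com.liferay.portal.kernel:4.4.0")
    compileOnly("org.osgi:org.osgi.service.component.annotations:1.4.0")
}
```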
I think the Unconference is very useful for everyone. On one hand, Liferay staff can validate their functionality and get feedback; on the other hand, we can talk about the topics that matter to us with experts. To finish the Unconference, we met in the plenary room and summarized the event. Some of the most interesting points of the Unconference are:
  • Philosophy/mentality 
  • Attitude (everyone wants to share, teach and learn)
  • You can propose topics easily
  • You have a lot of options to learn (so many experts). 
As I said at the beginning, the Unconference was a wonderful experience and I would like to come back again. Álvaro Saugar 2018-12-05T14:50:00Z
Categories: CMS, ECM