AdoptOS

Assistance with Open Source adoption

CMS

Asset Display Contributors in Action

Liferay - Sun, 10/21/2018 - 14:19

Display page functionality in Liferay has always been tightly coupled to Web Content articles. We never planned to support the same technology for other types of assets, even though we have many of them: Documents & Media, Bookmarks, Wiki, etc. Even User is an asset, and every user has a corresponding AssetEntry in the database. For Liferay 7.1 we decided to change this significantly: we introduced a new, fragment-based concept for Display Pages that is very flexible and much more attractive than the old one and... we still support only Web Content article visualization :). The good news for developers is that the framework is now extensible, and it is easy to implement an AssetDisplayContributor and visualize any type of asset using our fragment-based display pages. In this article I want to show you how to do it with an example.

Let's imagine that we want to launch a recruitment site: a typical one with tons of job offers, candidate profiles, thematic blogs, etc. One of the main features must be a candidate profile page, some sort of landing page with the candidate's basic information, photo, personal summary, and skills. This task can be solved using the new Display Pages.

As I mentioned before, User is an asset in Liferay and there is a corresponding AssetEntry for each User, which is convenient because for now we support visualization only for Asset Entries. To achieve our goal we need two things: first, an AssetDisplayContributor implementation for the User, which declares which fields are mappable and which values correspond to those fields; and second, a custom friendly URL resolver, so that we can reach a user's profile page through a friendly URL containing the user's screen name.

Let's implement the contributor first. It is very simple (some repeated code is skipped; the full class is available on GitHub):

@Component(immediate = true, service = AssetDisplayContributor.class)
public class UserAssetDisplayContributor implements AssetDisplayContributor {

	@Override
	public Set<AssetDisplayField> getAssetDisplayFields(
			long classTypeId, Locale locale)
		throws PortalException {

		Set<AssetDisplayField> fields = new HashSet<>();

		fields.add(
			new AssetDisplayField(
				"fullName", LanguageUtil.get(locale, "full-name"), "text"));

		/* some fields skipped here, see project source for the full
		   implementation */

		fields.add(
			new AssetDisplayField(
				"portrait", LanguageUtil.get(locale, "portrait"), "image"));

		return fields;
	}

	@Override
	public Map<String, Object> getAssetDisplayFieldsValues(
			AssetEntry assetEntry, Locale locale)
		throws PortalException {

		Map<String, Object> fieldValues = new HashMap<>();

		User user = _userLocalService.getUser(assetEntry.getClassPK());

		fieldValues.put("fullName", user.getFullName());

		/* some fields skipped here, see project source for the full
		   implementation */

		ServiceContext serviceContext =
			ServiceContextThreadLocal.getServiceContext();

		fieldValues.put(
			"portrait", user.getPortraitURL(serviceContext.getThemeDisplay()));

		return fieldValues;
	}

	@Override
	public String getClassName() {
		return User.class.getName();
	}

	@Override
	public String getLabel(Locale locale) {
		return LanguageUtil.get(locale, "user");
	}

	@Reference
	private UserLocalService _userLocalService;

}

As you can see, there are two main methods: getAssetDisplayFields, which defines the set of AssetDisplayField objects with the field name, label, and type (for the moment we support two types, text and image, and try to convert non-text values such as numbers, booleans, dates, and lists of strings to text), and getAssetDisplayFieldsValues, which provides the values for those fields from a specific AssetEntry instance. It is also possible to provide different field sets for different subtypes of entities, as we do for different Web Content structures, using the classTypeId parameter.

The second task is to implement a corresponding friendly URL resolver so that we can reach profiles by the user's screen name. Here I'll show only the implementation of the getActualURL method of the FriendlyURLResolver interface, because it is the method that matters; the full code of this resolver is also available on GitHub.

@Override
public String getActualURL(
		long companyId, long groupId, boolean privateLayout, String mainPath,
		String friendlyURL, Map<String, String[]> params,
		Map<String, Object> requestContext)
	throws PortalException {

	String urlSeparator = getURLSeparator();

	String screenName = friendlyURL.substring(urlSeparator.length());

	User user = _userLocalService.getUserByScreenName(companyId, screenName);

	AssetEntry assetEntry = _assetEntryLocalService.getEntry(
		User.class.getName(), user.getUserId());

	HttpServletRequest request =
		(HttpServletRequest)requestContext.get("request");

	ServiceContext serviceContext = ServiceContextFactory.getInstance(request);

	AssetDisplayPageEntry assetDisplayPageEntry =
		_assetDisplayPageEntryLocalService.fetchAssetDisplayPageEntry(
			assetEntry.getGroupId(), assetEntry.getClassNameId(),
			assetEntry.getClassPK());

	if (assetDisplayPageEntry == null) {
		LayoutPageTemplateEntry layoutPageTemplateEntry =
			_layoutPageTemplateEntryService.fetchDefaultLayoutPageTemplateEntry(
				groupId, assetEntry.getClassNameId(), 0);

		_assetDisplayPageEntryLocalService.addAssetDisplayPageEntry(
			layoutPageTemplateEntry.getUserId(), assetEntry.getGroupId(),
			assetEntry.getClassNameId(), assetEntry.getClassPK(),
			layoutPageTemplateEntry.getLayoutPageTemplateEntryId(),
			serviceContext);
	}

	String requestUri = request.getRequestURI();

	requestUri = StringUtil.replace(requestUri, urlSeparator, "/a/");

	return StringUtil.replace(
		requestUri, screenName, String.valueOf(assetEntry.getEntryId()));
}

The key part here is that we need to know which AssetDisplayPageEntry corresponds to the current user. For Web Content articles, there is a UI to pick a Display Page while editing the content. In the case of User it would also be possible to build such a UI and save the ID of the page in the database, but to keep this example simple I prefer to fetch the default display page for the User class and create the corresponding AssetDisplayPageEntry if it doesn't exist. At the end of the method, we redirect the request to our Asset Display Layout Type Controller to render the page using the corresponding page fragments.
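That final redirect is just a URL rewrite: swap the friendly URL separator for the internal /a/ path and the screen name for the asset entry ID. The sketch below illustrates it with plain Java; the separator "/user/", the screen name, and the entry ID 12345 are hypothetical example values, not taken from the Liferay API:

```java
// Minimal sketch of the URL rewriting done at the end of getActualURL.
public class FriendlyUrlRewriteSketch {

	// Rewrites e.g. "/web/guest/user/joe.bloggs" into "/web/guest/a/12345",
	// the internal URL handled by the asset display layout type controller.
	public static String rewrite(
		String requestUri, String urlSeparator, String screenName,
		long assetEntryId) {

		String rewritten = requestUri.replace(urlSeparator, "/a/");

		return rewritten.replace(screenName, String.valueOf(assetEntryId));
	}

	public static void main(String[] args) {
		System.out.println(
			rewrite("/web/guest/user/joe.bloggs", "/user/", "joe.bloggs", 12345));
	}
}
```

In the real resolver the separator comes from getURLSeparator() and the entry ID from the AssetEntry fetched for the user.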

That's it for the code; the remaining tasks don't require deploying anything else. Now let's prepare the fragments, create a Display Page, and try it out! For our Display Page we need three fragments: Header, Summary, and Skills. You can create your own fragments with editable areas and map them as you like, but if you are not yet familiar with the new Display Pages mapping concept, I recommend downloading my fragments collection and importing it into your site.

When your fragments are ready you can create a Display Page: go to Build -> Pages -> Display Pages, click the plus button, and arrange the fragments in the order you like. This is how it looks using my fragments:

By clicking on any editable area (marked with a dashed background) you can map that area to any field of the available asset types (there should be two: Web Content Article and User). Choose the User type, map all the fields you would like to show on the Display Page, and click the Publish button. After publishing, mark the new Display Page as the default for this asset type; this action is available in the kebab menu of the display page entry:

Now we can create a user and try our new Display Page. Make sure you fill in all the fields you mapped; in my case the fields are First name, Last name, Job title, Portrait, Birthday, Email, Comments (as the Summary), Tags (as the Skills list), and Organization (as Company). Save the user and use their screen name to see the final result:

In the same way, it is possible to create a corresponding AssetDisplayContributor for any type of asset and use it to visualize your assets in a brand new way with fragment-based Display Pages.

The full code of this example is available here.

Hope it helps! If you need any help implementing contributors for your assets, feel free to ask in the comments.

Pavel Savinov 2018-10-21T19:19:00Z
Categories: CMS, ECM

Fragments extension: Fragment Entry Processors

Liferay - Fri, 10/19/2018 - 00:02

In Liferay 7.1 we presented a new vision of the page authoring process. The main idea was to empower business users to create pages and visualize content in a very visual way, without needing to know technical details like Freemarker or Velocity for Web Content templates. To make this possible we introduced the fragment concept.

In our vision, a fragment is a building block which can be used to create content pages, display pages, or content page templates. A fragment consists of HTML markup, a CSS stylesheet, and JavaScript code.

Even though we really wanted to create a business-user-friendly application, we always keep our strong developer community and its needs in mind. The Fragments API is extensible and allows you to create custom markup tags to enrich your fragment code, and in this article I would like to show you how to create your own processors for fragment markup.

As an example, we are going to create a custom tag which shows a UNIX-style fortune cookie :). Our fortune cookie module has the following structure:

We use the Jsoup library to parse fragment HTML markup, so we have to include it in our build file (since it doesn't ship with the Portal core) among the other dependencies.

sourceCompatibility = "1.8"
targetCompatibility = "1.8"

dependencies {
	compileInclude group: "org.jsoup", name: "jsoup", version: "1.10.2"
	compileOnly group: "com.liferay", name: "com.liferay.fragment.api", version: "1.0.0"
	compileOnly group: "com.liferay", name: "com.liferay.petra.string", version: "2.0.0"
	compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "3.0.0"
	compileOnly group: "javax.portlet", name: "portlet-api", version: "3.0.0"
	compileOnly group: "javax.servlet", name: "javax.servlet-api", version: "3.0.1"
	compileOnly group: "org.osgi", name: "org.osgi.service.component.annotations", version: "1.3.0"
}

The OSGi bnd.bnd descriptor has nothing special because we don't export any packages and don't provide any capabilities:

Bundle-Name: Liferay Fragment Entry Processor Fortune
Bundle-SymbolicName: com.liferay.fragment.entry.processor.fortune
Bundle-Version: 1.0.0

Every Fragment Entry Processor implementation has two main methods: one to process fragment HTML markup, and one to validate the markup, to avoid saving fragments with invalid markup.

/**
 * @param fragmentEntryLink Fragment Entry Link object to get editable
 *        values needed for a particular case processing.
 * @param html Fragment markup to process.
 * @param mode Processing mode (@see FragmentEntryLinkConstants)
 * @return Processed Fragment markup.
 * @throws PortalException
 */
public String processFragmentEntryLinkHTML(
		FragmentEntryLink fragmentEntryLink, String html, String mode)
	throws PortalException;

/**
 * @param html Fragment markup to validate.
 * @throws PortalException In case of any invalid content.
 */
public void validateFragmentEntryHTML(String html) throws PortalException;

The FragmentEntryLink object gives us access to the particular fragment usage on a page, display page, or page template, and can be used if we want the result to depend on the parameters of that usage. The mode parameter can be used to add (or skip) processing steps in EDIT (or VIEW) mode.

In this particular case we don't need the validation method, but there is a good example of one in the Portal code.
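For illustration only, here is a self-contained sketch of the kind of rule a validateFragmentEntryHTML implementation might enforce. The one-tag-per-fragment rule, the plain string search, and the IllegalArgumentException are all my own simplifications; a real implementation would parse the markup (e.g. with Jsoup) and throw a PortalException:

```java
// Toy validator: rejects markup containing more than one <fortune> tag.
public class FortuneValidatorSketch {

	public static void validate(String html) {
		int first = html.indexOf("<fortune");

		if ((first >= 0) && (html.indexOf("<fortune", first + 1) >= 0)) {
			throw new IllegalArgumentException(
				"Only one <fortune> tag is allowed per fragment");
		}
	}
}
```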

Let's implement our fortune cookie tag processor! The only thing we have to do is iterate over every fortune tag we encounter and replace it with a random cookie text. As I mentioned before, we use Jsoup to parse the markup and work with the document:

@Override
public String processFragmentEntryLinkHTML(
	FragmentEntryLink fragmentEntryLink, String html, String mode) {

	Document document = _getDocument(html);

	Elements elements = document.getElementsByTag(_FORTUNE);

	Random random = new Random();

	elements.forEach(
		element -> {
			Element fortuneText = document.createElement("span");

			fortuneText.attr("class", "fortune");
			fortuneText.text(_COOKIES[random.nextInt(_COOKIES.length)]);

			element.replaceWith(fortuneText);
		});

	Element bodyElement = document.body();

	return bodyElement.html();
}

private Document _getDocument(String html) {
	Document document = Jsoup.parseBodyFragment(html);

	Document.OutputSettings outputSettings = new Document.OutputSettings();

	outputSettings.prettyPrint(false);

	document.outputSettings(outputSettings);

	return document;
}

private static final String[] _COOKIES = {
	"A friend asks only for your time not your money.",
	"If you refuse to accept anything but the best, you very often get it.",
	"Today it's up to you to create the peacefulness you long for.",
	"A smile is your passport into the hearts of others.",
	"A good way to keep healthy is to eat more Chinese food.",
	"Your high-minded principles spell success.",
	"The only easy day was yesterday."
};

private static final String _FORTUNE = "fortune";


That is it. After deploying this module to our Portal instance, the fortune tag is ready to use in the Fragments editor:

It is up to you how to render your own tags, which attributes to use, and which technology to use to process the tag contents. You could even create your own script language, or reuse one you already have in your CMS to avoid massive refactoring and keep using existing templates as-is.
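If you want to experiment with the tag-replacement idea outside of an OSGi bundle (and without Jsoup on the classpath), a stdlib-only sketch is below. The regex approach is only for experimentation with a fixed cookie text; inside the Portal, parsing the DOM as the processor above does is the safer choice:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FortuneTagSketch {

	// Matches <fortune/>, <fortune /> and <fortune></fortune>.
	private static final Pattern _FORTUNE_TAG = Pattern.compile(
		"<fortune\\s*/?>(</fortune>)?");

	// Replaces every fortune tag with a span containing the given cookie
	// text (the real processor picks a random cookie instead).
	public static String process(String html, String cookie) {
		Matcher matcher = _FORTUNE_TAG.matcher(html);

		return matcher.replaceAll(
			Matcher.quoteReplacement(
				"<span class=\"fortune\">" + cookie + "</span>"));
	}

	public static void main(String[] args) {
		System.out.println(
			process(
				"<div><fortune /></div>",
				"The only easy day was yesterday."));
	}
}
```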

The full Fortune Fragment Entry Processor code can be found here.

Hope it helps!

Pavel Savinov 2018-10-19T05:02:00Z
Categories: CMS, ECM

Using BOMs to Manage Liferay Dependency Versions

Liferay - Wed, 10/17/2018 - 15:15

Liferay is a large project, and many developers who are attempting to get their customizations to work with Liferay will often end up asking the question, "What version of module W should I use at compile time when I'm running on Liferay X.Y.Z?" To answer that question, Liferay has some instructions on how to find versions in its document, Configuring Dependencies.

This blog entry is really about what to do in situations where those instructions just aren't very useful.

Path to Unofficial BOMs

First, a little bit of background, because I think context is useful to know, though you can skip it if you want to get right to working with BOMs.

Back in late 2016, I started to feel paranoid that we'd start introducing major version changes to packages in the middle of a release and nobody would notice. To ease those concerns, I wrote a tool that indexed all of the packageinfo files in Liferay at each tag, and then I loaded up these metadata files with a Jupyter notebook and did a check for a major version change.

Then, like many other "is it worth the time?" problems, it evolved into a script that I'd run once a week, and a small web-based tool so that I wouldn't have to fire up Jupyter every time I needed to check what was essentially static information.

Fast forward to February 2017: our internal knowledge base was updated to a new wiki format which (accidentally?) provided support for HTML with script tags. So I shared my mini web-based tool on the wiki, which led our support team in Spain to share a related question that they'd been pondering for a while.

Imagine if you happened to need to follow the Liferay document, Configuring Dependencies, for a lot of modules. Doesn't that lookup process get old really fast?

So, given that it was clearly possible to create an unofficial reference document for every Liferay exported package, wouldn't it be nice if we could create an official reference document that recorded every Liferay module version?

Since I had all of the metadata indexed anyway, I put together a separate tool that displayed the information stored in bnd.bnd files at every tag, which sort of let you look up module version changes between releases. This let people get a sense for what an official reference document might look like.

(Note: I learned a year later that bnd.bnd files are not the correct file to look at if you want to know the version at a given tag. Rather, you need to look at the artifact.properties files saved in the modules/.releng folder for that information. So in case it helps anyone feel better, Liferay's release and versioning process isn't obvious to anyone not directly involved with the release process, whether you're a Liferay employee or not.)

From the outside looking in, you might ask: why didn't our team ask Liferay to officially provide a "bill of materials" (BOM), as described in the Introduction to the Dependency Mechanism in the official Maven documentation? That way, you'd only specify the version of Liferay you're using, and the BOM would take care of the rest. If such a BOM existed, a lookup tool would be completely unnecessary.

Well, that's how the request actually started at the time of DXP's release, but since it wasn't happening, it got downgraded to a reference document which looked immediately achievable.

Fast forward to today. Still no official reference document for module versions, still no official Liferay BOMs.

However, by chance, I learned that the IDE team has been evaluating unofficial BOMs currently located on repository.liferay.com. These BOMs were generated as proof of concepts on what that BOM might include, and are referenced in some drafts of Liferay IDE tutorials. Since I now had an idea of what the IDE team itself felt a BOM should look like, I updated my web-based tool to use all of the collected metadata to dynamically generate BOMs for all indexed Liferay versions.

Install the Unofficial BOM

For sake of an example, assume that you want to install release-dxp-bom-7.1.10.pom.

The proof of concept for this version exists in the liferay-private-releases repository of repository.liferay.com, and Liferay employees can set up access to that repository to acquire the file. Since there are no fix packs yet, it is also functionally equivalent to the original 7.1.0 GA1 release, and you can use com.liferay.portal:release.portal.bom:7.1.0 instead.

However, if you wish to use a version for which a proof of concept has not been generated (or if you're a non-employee wanting to use an unofficial DXP BOM), you can try using Module Version Changes Since DXP Release to generate a BOM for your Liferay version from the indexed metadata. If you go that route, open the file in a text editor and you should find something like the following near the top:

<groupId>com.liferay</groupId>
<artifactId>release.dxp.bom</artifactId>
<version>7.1.10</version>
<packaging>pom</packaging>

With those values for GROUP_ID, ARTIFACT_ID, VERSION, and PACKAGING, you can install the BOM to your local Maven repository by substituting them into the following mvn install:install-file command:

mvn install:install-file -Dfile=release.dxp.bom-7.1.10.pom \
	-DgroupId="${GROUP_ID}" -DartifactId="${ARTIFACT_ID}" \
	-Dversion="${VERSION}" -Dpackaging="${PACKAGING}"

And that's basically all you need to do when installing a BOM that's not available in any repository you can access.

Install Multiple BOMs

If you only have a handful of BOMs, you could repeat the process mentioned above for each of your BOMs. If you have a lot of BOMs to install (for example, you're a Liferay employee that might need to build against arbitrary releases, and you decided to use the earlier linked page and generate something for every Liferay fix pack), you may want to script it.

To keep things simple when pulling values out of an XML file, install the Python package yq, which provides the command-line tool xq for XML processing. It is similar to the popular tool jq, which provides JSON processing at the command line.

pip install yq

Once yq is installed, you can add the following to a Bash script to install the auto-generated BOMs to your local Maven repository:

#!/bin/bash

install_bom() {
	local GROUP_ID=$(cat ${1} | xq '.project.groupId' | cut -d'"' -f 2)
	local ARTIFACT_ID=$(cat ${1} | xq '.project.artifactId' | cut -d'"' -f 2)
	local VERSION=$(cat ${1} | xq '.project.version' | cut -d'"' -f 2)
	local PACKAGING=$(cat ${1} | xq '.project.packaging' | cut -d'"' -f 2)

	mvn install:install-file -Dfile=${1} -DgroupId="${GROUP_ID}" \
		-DartifactId="${ARTIFACT_ID}" -Dversion="${VERSION}" \
		-Dpackaging="${PACKAGING}"

	echo "Installed ${GROUP_ID}:${ARTIFACT_ID}:${VERSION}:${PACKAGING}"
}

for bom in *.pom; do
	install_bom ${bom}
done

Use BOMs in Blade Samples

In case you've never used a BOM before, I'll show you how to use one when building projects in the Blade Samples repository.

Reference the BOM in Maven

First, update the parent pom.xml so that child projects know which dependency versions are available, simply by adding the BOM as a dependency:

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>com.liferay.portal</groupId>
			<artifactId>release.dxp.bom</artifactId>
			<version>7.1.10</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

We set this as an import-scoped dependency so that we don't have to download all of Liferay's release artifacts just to have the version numbers (they are downloaded as needed when the specific artifacts are resolved).

Then, to have a child project use these versions, simply update the child project's pom.xml to omit the explicit version for any of the managed dependencies:

<dependency>
	<groupId>com.liferay.portal</groupId>
	<artifactId>com.liferay.portal.kernel</artifactId>
	<scope>provided</scope>
</dependency>

As noted in the Introduction to the Dependency Mechanism, the version specified in the parent POM is then chosen as the dependency version. Running mvn package in the child project will then download the actual versions for the noted Liferay release.

Note: If this is the first time you've run any Maven commands in the liferay-blade-samples repository, you'll want to make sure that all of the parent projects are installed or the build will fail. If you are new to Maven and aren't sure how to read pom.xml files, this is achieved with the following steps:

  1. Run mvn -N install in liferay-blade-samples/maven
  2. Run mvn install in liferay-blade-samples/parent.bnd.bundle.plugin.
Reference the BOM in Gradle

Liferay Workspace uses an older version of Gradle, so BOMs aren't supported by default. To get support for them, we first need to bring in io.spring.gradle:dependency-management-plugin:1.0.6.RELEASE.

The first step in doing this is to update the parent build.gradle so that Gradle knows where to find the plugin.

buildscript {
	dependencies {
		...
		classpath group: "io.spring.gradle", name: "dependency-management-plugin", version: "1.0.6.RELEASE"
	}
	...
}

The next step in doing this is to update the parent build.gradle so that Gradle makes sure to apply the plugin to each child project.

subprojects { subproject ->
	...
	apply plugin: "io.spring.dependency-management"
	...
}

Because we're installing the BOMs to our local Maven repository, the next step in doing this is to update the parent build.gradle so that Gradle knows to check that local repository. We can then also add the BOM to a dependencyManagement block.

subprojects { subproject ->
	...
	repositories {
		mavenLocal()
		...
	}

	dependencyManagement {
		imports {
			mavenBom "com.liferay.portal:release.dxp.bom:7.1.10"
		}
	}
	...
}

Then, to have a child project use these versions, simply update the child project's build.gradle to omit the explicit version for any of the managed dependencies:

dependencies {
	compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel"
	...
}

Minhchau Dang 2018-10-17T20:15:00Z
Categories: CMS, ECM

We want to invite you to DEVCON 2018

Liferay - Tue, 10/16/2018 - 10:04

Every year we, the developers doing amazing things with Liferay's products, have a unique opportunity to meet, learn, and enjoy long technical discussions with each other. All of that happens at DEVCON, our main developer conference, taking place in Amsterdam from November 6th to 8th. This year's agenda is filled with sessions delivering in-depth technical details about the products, presenting new technologies, and showcasing spectacular use cases.

Following tradition, DEVCON starts with an unconference the day before the main conference. That is a full day consisting solely of the best part of any traditional conference: the meetings and conversations in the halls between the talks. It's a day full of discussions with experts and colleagues on topics that attendees bring in.

We try to keep DEVCON prices at a reasonable level and provide several kinds of promotions for partners and organizations we have business relationships with. Yet there are talented developers in our community who work alone or for non-profit organizations, live in less developed parts of the world, or for whatever reason cannot afford a DEVCON ticket. This year we want to help some of them.

We have free DEVCON tickets to give away!

As much as we would love to invite the whole community, we have to live by market rules, so we only have a limited number of tickets. To help us decide, please send an email to developer-relations@liferay.com with the subject "Free DEVCON ticket" and tell us why you should be one of the people who receives a free ticket. We will decide among those with the most convincing, creative, and fun reasons.

See you in Amsterdam!

David Gómez 2018-10-16T15:04:00Z
Categories: CMS, ECM

Gradle Plugin for Managing Properties Files

Liferay - Mon, 10/15/2018 - 13:01

GitHub repository: https://github.com/ironcero/devtools-liferay-portal-properties

Summary

This Gradle plugin lets you manage the properties files in your Liferay workspace (versions 7.0 and 7.1).

The Blade tool initializes this kind of workspace with a folder named configs, which contains several subfolders:

  • common
  • dev
  • local
  • prod
  • uat

It is very common to need different values for the same properties depending on your environment. This plugin helps you manage that setup: it copies all properties files from one common folder to each environment folder, replacing every property placeholder it finds with the correct value from the filter files.
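The substitution step can be sketched in a few lines of plain Java: read the template, replace each ${key} with the value from that environment's filter file, and warn when a key has no value. The class and method names below are illustrative, not the plugin's actual internals:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PropertyFilterSketch {

	private static final Pattern _PLACEHOLDER = Pattern.compile(
		"\\$\\{([^}]+)\\}");

	// Substitutes ${key} placeholders in the template; keys without a
	// value are left in place and reported, mirroring the plugin's
	// WARNING log lines.
	public static String filter(String template, Map<String, String> keys) {
		Matcher matcher = _PLACEHOLDER.matcher(template);
		StringBuffer sb = new StringBuffer();

		while (matcher.find()) {
			String key = matcher.group(1);

			String value = keys.get(key);

			if (value == null) {
				System.out.println(
					"WARNING: Property not found (${" + key + "})");

				value = matcher.group();
			}

			matcher.appendReplacement(sb, Matcher.quoteReplacement(value));
		}

		matcher.appendTail(sb);

		return sb.toString();
	}

	public static void main(String[] args) {
		Map<String, String> keys = new HashMap<>();

		keys.put("test1", "Local");

		System.out.println(
			filter("testKey=testValue\ntest1Key=${test1}", keys));
	}
}
```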

How to use

First you will need the plugin JAR file. You can download the latest version from https://github.com/ironcero/devtools-liferay-portal-properties/blob/master/release/portal-properties-1.0.0.jar (a Maven Central version is coming soon), or download the source code from GitHub and compile it. If you download the JAR file, move it to the correct path in your local repository (the Gradle coordinates are devtools.liferay:portal-properties:1.0.0). If you download the source code and compile it, execute the install Maven task to install the JAR file into your local repository.

Once the JAR file is in place, set up your Liferay workspace. You will need to create two new folders. You can create them wherever you want, but we recommend creating them inside the common folder (in the configs folder).

Now set up the plugin by adding these lines to your build.gradle file:

buildscript {
	dependencies {
		classpath group: "devtools.liferay", name: "portal-properties", version: "1.0.0"
	}

	repositories {
		mavenLocal()

		maven {
			url "https://repository-cdn.liferay.com/nexus/content/groups/public"
		}
	}
}

apply plugin: "devtools-liferay-portal-properties"

buildproperties {
	descFolderPath = 'configs'
	originFolderPath = 'configs/common/origin'
	keysFolderPath = 'configs/common/keys'
}

build.finalizedBy(buildproperties)

In this example we use the configs/common/origin folder to keep the original properties files with their ${} placeholders, and the configs/common/keys folder to keep the per-environment values. In detail:

  • Dependencies: the Gradle coordinates of DevTools Liferay Portal Properties are devtools.liferay:portal-properties:1.0.0.
  • Repositories: you need the mavenLocal repository because you've installed the plugin JAR file into your local Maven repository.
  • Apply plugin: the DevTools Liferay Portal Properties plugin ID is devtools-liferay-portal-properties.
  • buildproperties: this section holds all the configuration parameters. In the 1.0.0 release these are:
    • descFolderPath: path where the properties files are copied and the properties replaced.
    • originFolderPath: location of the original properties files (with ${} filter parameters).
    • keysFolderPath: location of the filter properties files.
  • build.finalizedBy: with this command the plugin runs as part of the build task, not only on buildproperties.

Now it's time to add your properties files.

In the example we've created four filter files in the keysFolderPath folder (configs/common/keys):

  • dev.properties
  • local.properties
  • prod.properties
  • uat.properties

The content of these files is very similar (local.properties):
test1=Local

The file name (without the .properties extension) must match the environment folder name under descFolderPath.

In the example we've created only one properties file in the originFolderPath folder (configs/common/origin), but we could put in more properties files and all of them would be copied and filtered. portal-ext.properties in configs/common/origin:

testKey=testValue
test1Key=${test1}

Now you can generate your portal-ext.properties filtered per environment with the buildproperties Gradle task, or with the standard build task:

gradle buildproperties
gradle build

A typical log of the process looks like this:

:buildproperties
Build properties task...configs
Settings:
destination folder path: configs
origin folder path: configs/common/origin
keys folder path: configs/common/keys
Parsing dev environment...
Copying C:\dev\workspaces\devtools\liferay\portal-properties-test\liferay-workspace\configs\common\origin\portal-ext.properties to C:\dev\workspaces\devtools\liferay\portal-properties-test\liferay-workspace\configs\dev
WARNING: Property not found in file portal-ext.properties on dev folder (${test1})
WARNING: Property not found in file portal-ext.properties on dev folder (${test2})
Parsing local environment...
Copying C:\dev\workspaces\devtools\liferay\portal-properties-test\liferay-workspace\configs\common\origin\portal-ext.properties to C:\dev\workspaces\devtools\liferay\portal-properties-test\liferay-workspace\configs\local
Parsing prod environment...
Copying C:\dev\workspaces\devtools\liferay\portal-properties-test\liferay-workspace\configs\common\origin\portal-ext.properties to C:\dev\workspaces\devtools\liferay\portal-properties-test\liferay-workspace\configs\prod
WARNING: Property not found in file portal-ext.properties on prod folder (${test1})
Parsing uat environment...
Copying C:\dev\workspaces\devtools\liferay\portal-properties-test\liferay-workspace\configs\common\origin\portal-ext.properties to C:\dev\workspaces\devtools\liferay\portal-properties-test\liferay-workspace\configs\uat
WARNING: Property not found in file portal-ext.properties on uat folder (${test1})
BUILD SUCCESSFUL

Total time: 0.275 secs

You will see WARNING messages when your original properties files contain properties that have no corresponding value in your filter properties files.

You can review the Liferay test project at https://github.com/ironcero/devtools-liferay-portal-properties/tree/master/testProject/liferay-workspace

Ignacio Roncero Bazarra 2018-10-15T18:01:00Z
Categories: CMS, ECM

Joomla 3.8.13 Release

Joomla! - Tue, 10/09/2018 - 08:45

Joomla 3.8.13 is now available. This is a security release for the 3.x series of Joomla which addresses 5 security vulnerabilities.

Categories: CMS

Liferay Screens meets React Native, the sequel

Liferay - Mon, 10/08/2018 - 11:01

First of all, for those of you who don't know about it: Liferay Screens is a component library based on components called screenlets. A screenlet is a visual component that you insert into your native app to leverage Liferay’s content and services, allowing you to create complex native applications for iOS and Android very fast. Awesome, isn’t it?

BUT do you need to create the SAME application for iOS and Android, with the SAME features, twice? OK, with screenlets it doesn't take too much time, because most of the boring logic is encapsulated inside the screenlet and you only need to connect the dots. But it would be fantastic to have only one project and share the code between the two platforms.

How can we make this possible? Have you heard about React Native?

Goals

As you may know, React Native is a framework that allows you to create native applications (Android and iOS) in JavaScript using React. This avoids having to maintain two different codebases, one per platform. It’s based on components, so the screenlet concept fits React very well.

A long time ago, when React Native was released, we made a first proof of concept with some of the screenlets available at that moment. Now we have come back to this idea and made another proof of concept. This one features all our brand new and more complex screenlets and, yes, Android is supported too.

With this prototype we aim to provide a solution that makes mobile app development faster (yeah, even more!) with React Native, so we can use the screenlets the same way you would use any React Native component, like a Button component. Great! Do you want to see how it works? Take a look at the next video; it shows you how to use our library in React Native.

As you can see in the video, using screenlets from React Native is very easy. You only have to instantiate the screenlet that you want to use, give it a style with height and width (because otherwise the screenlet will not show) and, if you consider it appropriate, handle the events that the screenlet sends.

To handle an event, you specify a callback function that manages it. E.g., in the LoginScreenlet you can handle the onLoginSuccess event, fired when the user logs in correctly.

Of course, the attributes (known as props in React) depend on the screenlet you use, so some screenlets have required attributes; e.g., the UserPortraitScreenlet needs the userId attribute.
To use all of this functionality in your React project, you have to configure it following the steps in this video. Also, the project’s README describes the main steps to configure your React Native project.
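As a rough sketch, wiring a couple of screenlets into a React Native component might look like the following. The import path and the userId value are assumptions for illustration; the props shown (the mandatory style with height and width, the onLoginSuccess callback, and userId) are the ones described above:

```jsx
import React from 'react';
import {View} from 'react-native';

// Assumed package name; check the project's README for the real import.
import {LoginScreenlet, UserPortraitScreenlet} from 'liferay-screens-reactnative';

export default class ProfileScreen extends React.Component {
	render() {
		return (
			<View>
				{/* Without an explicit height and width the screenlet will not show. */}
				<LoginScreenlet
					onLoginSuccess={user => console.log('Logged in:', user)}
					style={{height: 300, width: 300}}
				/>

				{/* userId is a required attribute of this screenlet (the value is hypothetical). */}
				<UserPortraitScreenlet
					style={{height: 100, width: 100}}
					userId={20126}
				/>
			</View>
		);
	}
}
```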

What is the status of the project?

For now this is a prototype. Even so, ALL screenlets are available in React Native. In total, we have 21 screenlets on Android and 22 on iOS (the FileDisplayScreenlet is only available on iOS). To play with them, we recommend using the most common screenlets, like the ImageGalleryScreenlet (which shows an image gallery), the UserPortraitScreenlet, the CommentListScreenlet (which shows the list of comments on an asset) and, of course, the LoginScreenlet, but you can use whichever you want.
So explore and tinker with them. Here you have the project.

How does it work?

We don’t want to bore you with technical details. Basically, we built a bridge: one side lives in the native part and the other side in the React Native part, which allows communication between them and renders the screenlets.

What now?

Well, now it depends on you. You have the project to play with. We are open to suggestions and feedback. Honestly, we are very happy with the result for now.

Thanks for reading.

Luismi.

Luis Miguel Barco 2018-10-08T16:01:00Z
Categories: CMS, ECM

Listing out context variables 

Liferay - Mon, 10/08/2018 - 04:21
What's a Context Contributor?

While developing a theme I wondered: how can I know all the variables that I can access in my FreeMarker templates? Wouldn't it be great if I could write a utility program that lists all the variables and objects accessible from our theme or other template files? The Context Contributor concept can be used for this purpose. Using a context contributor, we can write a piece of code that injects contextual information which can be reused in various template files. If you are not aware of context contributors, please visit my article http://proliferay.com/context-contributor-concept-in-liferay-7dxp/.

How to create a context contributor?

Using the Liferay IDE we can create the project structure. 

 

The Code: 

Our context contributor class has to implement the TemplateContextContributor interface, and we need to implement the method below.

public void prepare(Map<String, Object> contextObjects, HttpServletRequest request) {
    // The fun part is here
}

In the code above, the first parameter, contextObjects, is a map which contains all the contextual information as key-value pairs. We can iterate over all the entries of the map and write them to a file. Here is the complete code of the method. It writes a file named all-variables.html to my D: drive; of course, you can change that as you wish.

@Override
    public void prepare(
        Map<String, Object> contextObjects, HttpServletRequest request) {
        PrintWriter writer;
        try {
            writer = new PrintWriter("D:\\all-variables.html", "UTF-8");
            StringBuffer stb = new StringBuffer();
            stb.append("<html>");
            stb.append("<body>");
            stb.append("<table border=\"1\">");
            stb.append("<tr>");
            stb.append("<th>Variable Name</th>");
            stb.append("<th>Variable Value</th>");
            stb.append("</tr>");

            for (Map.Entry<String, Object> entry : contextObjects.entrySet()) {
                stb.append("<tr>");
                stb.append("<td>"+entry.getKey()+"</td>");
                stb.append("<td>"+entry.getValue()+"</td>");
                
               // System.out.println(entry.getKey() +" = =  "+ entry.getValue());
                stb.append("</tr>");
            }
            
            stb.append("</table>");
            stb.append("</body>");
            stb.append("</html>");
            
            writer.println(stb.toString());
            writer.close();
        } catch (FileNotFoundException | UnsupportedEncodingException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }    
    }

Just deploy the module and access any page. The code will be executed and your file is ready. Now you have all the contextual information, which can be used in theme development as well as when writing ADTs.
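Once a variable shows up in all-variables.html, you can reference it directly in a theme template. A minimal sketch, assuming the dump lists the standard theme_display object:

```ftl
<#-- portal_normal.ftl: use a context variable found in the generated file -->
<p>Current site: ${theme_display.getScopeGroupName()}</p>
```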

The output of the code:

 

Have fun... Hamidul Islam 2018-10-08T09:21:00Z
Categories: CMS, ECM

Why I'm Flying South to LSNA2018

Liferay - Sat, 10/06/2018 - 20:10
or, How to blow a Saturday evening writing a blog post just because  the kids don't want to hang out with you

 

Here are the 5 reasons I am flying down tomorrow evening.

I've been to LSNA twice before, in 2015 and 2016. I remember the energy and the ambience. Some of the topics deserve waaaay more than the 30 minutes or hour that is allocated to them, but then the presentations are designed to leave you with just enough to take a deep dive on whichever topic interests you, and on that front, they absolutely deliver. So, here's to more of that.

Unconference. I've never attended one of these before, but the prospect has me interested. I mean, it looks like the attendees get to carve out the day's agenda. I'll be bringing my list of topics fwiw. Something tells me there'll be enough knowledge sharing to go all around. 

Speed consulting. I hope I get to reserve a spot. I have a half-baked design approach around a SAML-authentication requirement using the SAML connector plugin - just a lot of holes that need plugging. Hoping a 20-minute session will help clear things up for me.

Agenda topics: As always, great spread! Here's the top 5 items on my radar at this time:

  • Search (Oct 9, 10:35)
  • Securing your APIs for Headless Use (Oct 10, 11:00)
  • Extending Liferay with OSGI (Oct 9, 4:30)
  • Liferay Analytics Cloud (Oct 10, 10:20)
  • Building Amazing Themes (Oct 9, 3:50)

Food. I have on my to-do list to eat a bowl of authentic étouffée. I will have to seek out the best place for this.

Javeed Chida 2018-10-07T01:10:00Z
Categories: CMS, ECM

Adding 2FA to Liferay DXP 7.1

Liferay - Wed, 10/03/2018 - 12:09

We recently had a requirement to add 2 Factor Authentication support for a demo, so I am pleased to share our implementation with the community.

 

Login

On login the user sees a new 'Authenticator Code' field below Password:

 

 

The user populates their credentials, launches Google Authenticator app (or other 2FA app) on their phone and gets their code:

 

 

The user enters it on screen, clicks Sign In and hey presto, they have logged in with 2FA.

 

User setup

QR Codes are used to share the profile details with the end user:

 

 

These are shared with the end user by email, and for convenience (e.g. for demos and testing) the QR Code is also available through the Liferay profile screens (on the Password tab):

 

 

Rollout

To simplify rollout:

  • QR Codes used to configure the 2FA app. (Alternatively the user can manually configure the 2FA app.)
  • Users created after the full set of application modules are deployed will automatically be assigned a Secret Key on account creation and will be emailed a link to the QR Code.
  • There is an optional activator bundle that will generate Secret Keys and email QR Codes to all users.
  • Administrators can bypass 2FA and a custom User Group can be created to allow certain users to bypass 2FA if required.

 

Source & Documentation

The source is available here: https://github.com/michael-wall/liferay-2fa including a readme with deployment steps and more information on configuration, limitations (e.g. storing Secret Keys in plain text) etc.

Michael Wall 2018-10-03T17:09:00Z
Categories: CMS, ECM

Upgrade WURFL's database into Liferay Mobile Device Detection Lite 

Liferay - Tue, 10/02/2018 - 10:02
Introduction

If you're reading this post, it's probably because you need to know which devices access your Liferay instance through Liferay Mobile Device Detection Lite, especially when you can't explain why Liferay detects a different version of your modern, cool and super updated device!

Don't worry! I'll try to explain what to do.

WURFL's database

Before explaining, do you know the WURFL database? If you don't, you can watch this short video!

In order to detect your device, you already know that you need to download and install Liferay Mobile Device Detection Lite from the Marketplace.

This app contains a WURFL database prepopulated inside the bundle through an external service called 51Degrees. The database is populated only when the bundle is built, not at runtime.

processResources {
	into("META-INF") {
		from {
			FileUtil.get(project, "https://github.com/51Degrees/Device-Detection/raw/master/data/51Degrees-LiteV3.2.dat")
		}
	}
}

Code from the build.gradle of this app.

The result is a 51Degrees.dat file inside the META-INF folder and, as you can imagine, this file is the engine of the device detection process.

The last release (build) of Liferay Mobile Device Detection Lite on the Marketplace was over a year ago, so the device data is now very old.

How to upgrade this WURFL's database

In the following image you can see the configuration of Liferay Mobile Device Detection Lite (51Degrees Device Detection) and how the WURFL database is linked.

Unfortunately, this configuration only checks the file system inside your bundle; we can't link a URL or set an absolute path to a data file kept elsewhere.

The only way to add or replace files inside an existing bundle is a fragment, so that is the approach we'll use to add a new WURFL database.

You can check my project on GitHub, where I have put the file under the META-INF folder; through the bnd file we tell Liferay to add this file to the original bundle.

Fragment-Host: com.liferay.portal.mobile.device.detection.fiftyonedegrees;bundle-version="1.0.9"

Finally, we can change the configuration to link the new WURFL database and restart the server.

Conclusion

This database is not updated daily; you can check the update status of the file here. When you add a new file, don't reuse the same filename: change it.

Davide Abbatiello 2018-10-02T15:02:00Z
Categories: CMS, ECM

The Future of Java and How It Affects Java and Liferay Developers

Liferay - Wed, 09/26/2018 - 05:25

Java 11 has just been released (on Sep 25th) and it comes with the consolidation of a series of changes, not only to the language and the platform, but to the release and support model, which has led to some noise about the future of Java.

Probably the two most notable concerns are the end of public updates for Java 8 and the uncertainty of the rights to use Oracle JDK without paying for commercial support.

Although it is true that, with the new changes, Oracle is going to focus only on the latest Java version and will offer commercial support for its JDK, it is also true that we, as Java and Liferay developers, will still be able to use Java and the JDK freely.

 

The changes in the release cadence and model

In 2017, it was already announced that Java was going to move faster, scheduling a new feature release every six months, in March and September. That meant that after Java 9, released in September 2017, Java 10 would be released in March 2018 and Java 11 in September 2018, which has just happened.

The second big change is the introduction of the concept of LTS (Long Term Support) versions, which are versions 'marked' to be maintained for more than six months. This mark is not a commitment from Oracle, but a recommendation for the industry and community.

On the other hand, the differences between Oracle JDK and OpenJDK have been eliminated. In fact, Oracle is leading the work on the OpenJDK LTS code base during the first six months after each release. This makes OpenJDK the new default.

After that, Oracle will provide updates for the Oracle JDK only to customers with a commercial license. At the same time, Oracle will allow and encourage other vendors (like IBM, Red Hat, Azul or the community-based AdoptOpenJDK) to work on the OpenJDK LTS codebase to keep providing updates.

That means that Oracle will provide free updates for every Java version during the first six months after release, and other vendors and community initiatives will provide free updates for LTS versions for a longer period.

 

Will Java 8 still be freely available?

Java 8 was an LTS, so it is replaced by Java 11, which is also an LTS. Oracle has announced that free public updates of Java 8 for commercial use will end in January 2019.

But the good news is that Red Hat has already applied to lead the development and updates of OpenJDK 8 after that date, and other companies like Amazon, Azul Systems, and IBM have announced that they will support Red Hat.

So we will actually have free Java 8 updates at least until September 2023, based on OpenJDK.

 

In conclusion

Although Oracle is focusing its efforts on the six-month releases, there will still be free updates for the LTS versions of Java, provided first by Oracle and afterwards maintained by other vendors, which will offer free updates and, in some cases, commercial support as well.

If you want to dig a little deeper into the details of all these changes, there is a comprehensive document titled "Java is Still Free", written and updated by the community of Java Champions, that covers this topic in detail and includes an updated table with the plans for support and updates, which so far is as follows:

And at Liferay, we will also pay attention to these changes and to the plans for supporting the different versions of Java, in order to update our Liferay JDK Compatibility Support accordingly.

David Gómez 2018-09-26T10:25:00Z
Categories: CMS, ECM

How JDK11 Affects Liferay Portal/DXP Users

Liferay - Tue, 09/25/2018 - 16:20

With the release of JDK11, Oracle's new Java SE Support Policy (and here) brings sweeping changes to the Java Enterprise community.

If you would like a good explanation of the changes to come, I highly recommend this video.

Here are my thoughts on how some of these changes will affect Liferay Portal/DXP users:

Starting with JDK11, you will no longer be able to use Oracle JDK for free for commercial purposes.

All commercial projects will require a subscription/service from Oracle to get the Oracle JDK. The majority of Liferay users are commercial users who deploy on Oracle JDK. If you do not pay for support from Oracle for their JDK or one of their other products, such as WebLogic, you will need to decide whether you wish to continue using the Oracle JDK.

An OpenJDK binary is now your free option

Oracle will continue to provide a free alternative with their Oracle OpenJDK binary. Others, such as Azul, Red Hat, IBM, and AdoptOpenJDK, will also provide their own binaries. For now, Liferay has only certified Oracle's OpenJDK binary. We have yet to determine whether all OpenJDK binaries can fall under the same certification or whether we need to certify them individually.

A new JDK will be released every 6 months and some of them will be marked as LTS release.

Prior to JDK9, all JDKs were essentially LTS releases. We received years of bug fixes before we had to worry about a new release. We will now see a new JDK release every 6 months (March and September).

As of now, Liferay has decided not to certify every single major release of the JDK. We will instead follow Oracle's lead and certify only those releases marked as LTS. If you have seen our latest compatibility matrix, you will notice that we did not certify JDK9 or JDK10. We will instead certify JDK11, JDK17, and JDK23, as those are the ones marked as LTS. This is subject to change.

Oracle will only guarantee 2 updates per major release

JDK8 has received 172 updates so far. In contrast, JDK9, the first release that Oracle implemented this policy, had 4 updates, while JDK10 only got the minimum 2 updates. Although JDK11 is designated as an LTS release, there is no guarantee of more than 2 updates from Oracle.

We will have to wait until JDK12 is released to see what will happen with JDK11. The optimistic side of me feels that the Java open source community will continue to backport security and bug fixes well after Oracle decides to stop. We will have to wait and see.

January 2019 will be the last free public update for JDK8. 

If you are a Liferay user and you have not made a plan for your production servers, please start!

I will list the paths currently available, but these are in no way recommendations from Liferay. It is up to you to make the best decision for your own company.

  • Continue to use Oracle JDK8 without any future patches
  • Continue to use Oracle JDK8 and pay for a subscription
  • Switch to Oracle JDK11 and pay for a subscription
  • Switch to Oracle OpenJDK11 binary (knowing that you will have to make this decision again in 6 months)
  • Switch to a certified version of IBM JDK.

I will try to update this list as more options become available (e.g., if we decide to certify AdoptOpenJDK, Azul Zulu, or Red Hat's JDK).

I am eager to see how the rest of the enterprise community reacts to these changes. 

Please leave a comment below with any thoughts on Oracle's changes or suggestions on what you would like to see Liferay do in regards to JDK support.

David Truong 2018-09-25T21:20:00Z
Categories: CMS, ECM

Red alert, shields up - The work of the Joomla Security Team

Joomla! - Tue, 09/25/2018 - 04:00

A CMS-powered website has all the ingredients for an IT security nightmare: it is publicly accessible, it’s running on powerful machines with great connectivity and the underlying system is used countless times around the globe, making it an attractive target for attackers.
The Joomla Security Strike Team (JSST) is working hard to make sure that this nightmare doesn’t become reality for Joomla users!

Categories: CMS


Lines, Splines, and Bars, Oh My!

Liferay - Thu, 09/20/2018 - 13:59

Charts are great visual aids. They're crucial to digesting data and information quickly. Prior to Liferay Portal 7.1, we didn't have a way to include charts in our apps. No more! Now you can use charts out-of-the-box, thanks to the Chart taglib. Whether it's bar charts, line charts, pie charts, scatter charts, spline charts, or even donut charts, the Chart taglib has you covered. I like a good old fashioned pie myself: Apple, Blueberry, Pumpkin, you name it. They're delicious, and so are these charts.

This blog post gives an overview of how charts work in Liferay and links to our official documentation, so you can dive deeper and go chart crazy!

How Does It Work?

Using the Chart taglib is pretty straightforward: you provide the data, then configure the chart to digest that data in the JSP. Data can be written in a Java class, a JSON file, an array of numbers, etc.

A Few Good Examples

Alright, so you have an idea of how charts work in Liferay, but what about an actual example? No problem. Let's start with the oh so delicious pie chart.

Pie Chart Example

Pie charts display percentage-based slices that represent data. This example sets up some sample data via a Java class, and then feeds that data into the Pie chart tag's config attribute.

Java Sample Data:

public class ChartDisplayContext {

	public PieChartConfig getPieChartConfig() {
		PieChartConfig _pieChartConfig = new PieChartConfig();

		_pieChartConfig.addColumns(
			new SingleValueColumn("data1", 30),
			new SingleValueColumn("data2", 70));

		return _pieChartConfig;
	}
}

JSP:

<chart:pie config="<%= chartDisplayContext.getPieChartConfig() %>" id="pie" />

The resulting Pie chart:

As you can see, configuring the chart is fairly easy. Now, let's take a look at a combination chart.

Combination Chart Example

Combination charts let you visualize multiple types of data in one chart. Simply specify the representation type of each data set. The example below also makes use of grouping. Data 1 and Data 2 are grouped together within the same bar.

Java sample data:

public class ChartDisplayContext {

	private CombinationChartConfig getCombinationChartConfig() {
		CombinationChartConfig _combinationChartConfig = new CombinationChartConfig();

		_combinationChartConfig.addColumns(
			new TypedMultiValueColumn("data1", Type.BAR, 30, 20, 50, 40, 60, 50),
			new TypedMultiValueColumn("data2", Type.BAR, 200, 130, 90, 240, 130, 220),
			new TypedMultiValueColumn("data3", Type.SPLINE, 300, 200, 160, 400, 250, 250),
			new TypedMultiValueColumn("data4", Type.LINE, 200, 130, 90, 240, 130, 220),
			new TypedMultiValueColumn("data5", Type.BAR, 130, 120, 150, 140, 160, 150),
			new TypedMultiValueColumn("data6", Type.AREA, 90, 70, 20, 50, 60, 120));

		_combinationChartConfig.addGroup("data1", "data2");

		return _combinationChartConfig;
	}
}

JSP:

<chart:combination config="<%= chartDisplayContext.getCombinationChartConfig() %>" id="combination" />

The resulting combination chart:

So far, we've looked at static charts. Let's see how we can update charts to reflect real time data.

Real Time Data Example

Charts can reflect static or real time data, such as data fed in from a JSON file that changes periodically. This is made possible via each chart's optional polling interval property, which specifies the time in milliseconds between refreshes of the chart's data. To set the polling interval property, use the setPollingInterval() method.

Java sample data:

public class MyBarChartDisplayContext {

	public BarChartConfig getBarChartConfig() {
		BarChartConfig _barChartConfig = new BarChartConfig();

		_barChartConfig.addColumns(
			new MultiValueColumn("data1", 100, 20, 30),
			new MultiValueColumn("data2", 20, 70, 100));

		_barChartConfig.setPollingInterval(2000);

		return _barChartConfig;
	}
}

The real time data is simulated in the JSP via a promise that resolves when the chart component is loaded:

<chart:bar
	componentId="polling-interval-bar-chart"
	config="<%= myBarChartDisplayContext.getBarChartConfig() %>"
	id="polling-interval-bar-chart"
/>

<aui:script>
	Liferay.componentReady('polling-interval-bar-chart').then(
		function(chart) {
			chart.data = function() {
				return Promise.resolve(
					[
						{
							data: [Math.random() * 100, Math.random() * 100, Math.random() * 100],
							id: 'data1'
						},
						{
							data: [Math.random() * 100, Math.random() * 100, Math.random() * 100],
							id: 'data2'
						}
					]
				);
			};
		}
	);
</aui:script>

The resulting real time bar chart (looped for effect):

Geomap Chart Example

A geomap chart lets you visualize data based on geography, given a specified color range (usually a lighter color represents a lower value and a darker color a higher one). This example ranks each location by the length of its name, as specified by the line geomapColor.setValue("name_len"); (name_len is a property in the geomap's JSON file). The setValue() method defines which JSON property the geomap is based on, and the JSON file path is specified with the setDataHREF() method. The example below uses custom colors.

Java sample data:

public class ChartDisplayContext {

	public GeomapConfig getGeomapConfig() {
		GeomapConfig _geomapConfig = new GeomapConfig();

		GeomapColor geomapColor = new GeomapColor();
		GeomapColorRange geomapColorRange = new GeomapColorRange();

		geomapColorRange.setMax("#b2150a");
		geomapColorRange.setMin("#ee3e32");

		geomapColor.setGeomapColorRange(geomapColorRange);
		geomapColor.setSelected("#a9615c");
		geomapColor.setValue("name_len");

		_geomapConfig.setColor(geomapColor);

		String href = "https://mydomain.com/myservice/geomap.geo.json";

		_geomapConfig.setDataHREF(href);

		return _geomapConfig;
	}
}

The JSP not only points to the data, but also includes styling for the geomap SVG in this case:

<style type="text/css">
	.geomap {
		margin: 10px 0 10px 0;
	}

	.geomap svg {
		width: 100%;
		height: 500px !important;
	}
</style>

<chart:geomap
	config="<%= chartDisplayContext.getGeomapConfig() %>"
	id="geomap-custom-colors"
/>

Resulting geomap:

No longer does that burning question have to keep you up at night: 
Where in the World is Carmen Sandiego? 

Thanks to Liferay's charts, now we can use real time data to track her whereabouts on our geomap:

This blog post gave a brief overview of how to use charts in Liferay, using simplified code examples. Check out our official documentation on dev.liferay.com for complete examples and information. Thanks for reading!

 

Michael Williams 2018-09-20T18:59:00Z
Categories: CMS, ECM

Installing to SQL Server Using Windows Integrated Authentication

Liferay - Tue, 09/18/2018 - 11:10

This is a quick post on installing Liferay DXP to a SQL Server database.

The Liferay documentation does include the database properties that make up the connection string, but things weren't as clear when SQL Server is set up for Windows integrated authentication, hence this post.

The steps below assume you are setting up a brand new Liferay installation and pointing it to a SQL Server database using the Basic Configuration page that comes up after the first startup of a Liferay server. You could also use portal-ext.properties to specify the connection URL provided below.

  1. Extract the Liferay DXP bundle.
  2. Add the jdbc driver jar to the classpath.
    1. Download the SQL Server JDBC driver and copy the relevant jar to your tomcat/lib/ext/.
  3. Start the server.
  4. When presented with the Basic Configuration screen, click Change to change the database from the hsql that ships with Liferay.
  5. Tweak your connection URL. This is important. Note the integratedSecurity=true.
    1. jdbc:sqlserver://hostname;databaseName=databasename;integratedSecurity=true;
  6. Clear your username and password. You’re using Windows integrated authentication, so you don’t need the database user.
  7. That’s not all though. If you attempt to test your configuration by clicking Finish with the above, you will see an error in the logs.
    This driver is not configured for integrated authentication.
    ...
    ...
    Caused by: java.lang.UnsatisfiedLinkError: no sqljdbc_auth in java.library.path
  8. sqljdbc_auth.dll is the missing piece. You should be able to find that DLL file in an auth subfolder of your JDBC Driver download. Place that DLL file somewhere on your file system. IMPORTANT: THIS IS DEFICIENT. SEE COMMENTS AFTER READING. Then make these two changes to your catalina.bat. 
    1. Add the below line just before rem Execute Java with the applicable properties.
      rem Set the Java library path to help the JVM locate sqljdbc_auth.dll for integrated authentication
      set JAVA_LIB_PATH=c:\app\drivers
    2. After rem Execute Java with the applicable properties, there should be a few calls that run Tomcat in different execution scenarios. For each of those calls, insert the JVM parameter java.library.path with the value of the environment variable %JAVA_LIB_PATH% as its argument. One such call is shown below with the new JVM argument included.
      %_EXECJAVA% %LOGGING_CONFIG% %LOGGING_MANAGER% %JAVA_OPTS% %CATALINA_OPTS% %DEBUG_OPTS% -D%ENDORSED_PROP%="%JAVA_ENDORSED_DIRS%" -classpath "%CLASSPATH%" -Dcatalina.base="%CATALINA_BASE%" -Dcatalina.home="%CATALINA_HOME%" -Djava.io.tmpdir="%CATALINA_TMPDIR%" -Djava.library.path=%JAVA_LIB_PATH% %MAINCLASS% %CMD_LINE_ARGS% %ACTION%

      Make sure you tweak all the Tomcat execution scenarios.

  9. Restart Liferay.
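If you prefer skipping the Basic Configuration screen, the same connection can be sketched in portal-ext.properties. The property names below are Liferay's standard JDBC settings; hostname and databasename are the placeholders from step 5:

```properties
# Windows integrated authentication: no jdbc.default.username / jdbc.default.password needed.
jdbc.default.driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver
jdbc.default.url=jdbc:sqlserver://hostname;databaseName=databasename;integratedSecurity=true;
```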

Javeed Chida 2018-09-18T16:10:00Z
Categories: CMS, ECM

New Liferay Project SDK Installers 3.3.1 GA2 Release

Liferay - Tue, 09/18/2018 - 02:33

We are pleased to announce the second generally available release of Liferay Project SDK Installers.

 

New Installers

This release fixes the installation problems, along with a few other bugs.

 

Download

Customers can download all of the installers on the customer studio download page.

 

Upgrade From Previous 3.x

  1. Download the update site here

  2. Go to Help > Install New Software… > Add…

  3. Select Archive..., then browse to the downloaded update site

  4. Click OK to close the Add Repository dialog

  5. Select all features to upgrade, then click Next; click Next again and accept the license agreements

  6. Click Finish and restart to complete the upgrade

Feedback

If you run into any issues or have any suggestions, please come find us on our community forums or report them on JIRA (IDE project); we are always around to try to help you out. Good luck!

Yanan Yuan 2018-09-18T07:33:00Z
Categories: CMS, ECM

Liferay Faces Released With Liferay Portal 7.1 Support!

Liferay - Mon, 09/17/2018 - 11:35
Liferay Faces Released With Liferay Portal 7.1 Support!

Liferay Faces Portal 3.0.3, Bridge Impl 4.1.2, and Bridge Ext 5.0.3 have been released with support for Liferay Portal 7.1! The release also includes several SPA/SennaJS and PrimeFaces fixes! The release is compatible with Liferay Portal 7.0 and 7.1. Go to liferayfaces.org to get the latest dependency configurations and archetype generate commands.

Liferay Faces Bridge Impl 4.1.2 Release Notes Highlights
  • [FACES-3333] - Add Liferay Portal 7.1 Support
  • [FACES-3327] - PrimeFaces exporter components (p:fileDownload, p:dataExporter, and pe:exporter) cause next navigation to fail

Full Release Notes

Liferay Faces Bridge Ext 5.0.3 Release Notes Highlights
  • [FACES-3333] - Add Liferay Portal 7.1 Support
  • [FACES-3175] - Navigating to page with the same resources but different portlet mode, window state, portlet id, or path via SPA causes certain components to fail
  • [FACES-3328] - PrimeFaces markup remains after SPA navigation in Liferay 7.0+

Full Release Notes

Liferay Faces Portal 3.0.3 Release Notes Highlights
  • [FACES-3339] - portal:inputRichText fails to rerender on Chrome (Liferay Portal 7.0 GA7)

Full Release Notes

Archetypes

Along with these updates, all of our JSF 2.2 compatible archetypes have been updated to the latest appropriate Liferay Faces artifacts and Mojarra 2.2.18.

Known Issues
  • [FACES-3340] - portal:inputRichText fails to render on Ajax request if render="false" on initial request
  • [FACES-3347] - Alloy components log warnings on rerender in Liferay Portal 7.1
  • [FACES-3348] - Selecting non-existent option for autoComplete (with ajax) causes non-Ajax submit on Liferay Portal 7.1
  • [FACES-3342] - JSP JSTL fails in Liferay Portal 7.1.10 GA1 + FP1

Please report any issues with this new release in JIRA and ask any questions in our forums.

Kyle Joseph Stiemann 2018-09-17T16:35:00Z
Categories: CMS, ECM

Recognizing Web Content Mismanagement 

Liferay - Sat, 09/15/2018 - 19:24

I’ve been mulling over an oft-encountered requirement, and I figured it would be alright to jot down my thoughts, even if only to evoke a response. Nothing makes my day like a comment that goes, “Uh, actually, there is a better way to do that.”

Here’s the problem definition. It’s quite typical.

  1. Customer has one site. There will be a lot of content on it.

  2. Customer wants all content to be subject to a workflow.

  3. Here’s the important part. Customer wants different users (or groups of users) to be responsible for different content areas.

    1. When IT content gets edited, the IT content contributors should be the ones participating in the workflow to review/approve it.  When Finance content is edited, the Finance content contributors should be the ones participating in the workflow.

  4. The Problem: How can we address the above when a given asset type (in this case, web content) can only be associated with one workflow? Users who have the content reviewer role will be able to review any and all content. So how do we accomplish what is in 3(a) above?

Now, this is a no-brainer for anyone who’s really gotten into Liferay’s user and site management. But I’ve noticed that a lot of developers who are heads-down in their work, be it portlet implementation, theme-sculpting or other siloed work, don’t see how this can be addressed, usually because they have not found the time to read through the documentation.

It is important for us developers to understand how Liferay’s site and user management work at a fundamental level because a lot of Liferay’s feature sets are built to address scenarios just like this. Not knowing about these fundamentals can result in solutions that often miss key considerations, or worse, reinvent the wheel. And we all know how that can impact our lives.

Let me get to it. Here are three ways to address the above problem. Nothing earth-shattering here.

The First Way

Add smarts to your workflow. We have some fantastic documentation on Liferay workflows that truly demonstrates the sky is the limit. With cleverly defined states and transitions and carefully written Java code running inside <condition> and <assignments> tags, it should be easy to accomplish the above. One approach goes like this:

  • Define a bunch of Regular Roles, one for each team. They don’t have to have any permissions. They would just be marker roles. E.g. Finance Reviewer, IT Reviewer, etc.

  • Assign users these roles as needed.

  • Use categories or tags or custom fields to organize your content in some standard conventional way. E.g. Finance content has the category Finance.

  • In your workflow, write the Java you need to examine the content’s category and then assign the content item to the corresponding marker role: if category == Finance then role = Finance Reviewer.

Here is some documentation (with code) describing this exact scenario.
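The category-to-role mapping described above boils down to a small lookup. Here is a minimal, self-contained sketch of just that mapping logic; the class, category, and role names are illustrative, and a real Kaleo workflow script would obtain the content's categories through Liferay's asset services rather than receive them as a plain string:

```java
import java.util.Map;

// Hypothetical sketch of the mapping a workflow <condition> or <assignments>
// script would perform: asset category name in, marker role name out.
public class ReviewerRoleMapper {

    // One marker role per team, as described in the bullet list above.
    private static final Map<String, String> CATEGORY_TO_ROLE = Map.of(
        "Finance", "Finance Reviewer",
        "IT", "IT Reviewer");

    public static String resolveReviewerRole(String categoryName) {
        // Fall back to a generic reviewer role when no team category matches.
        return CATEGORY_TO_ROLE.getOrDefault(categoryName, "Content Reviewer");
    }

    public static void main(String[] args) {
        System.out.println(resolveReviewerRole("Finance"));
    }
}
```

With 30-40 teams, this table is exactly the part that grows, which is the maintainability concern raised under Cons below.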

Pros

  • One site to have it all. One workflow to rule them all.

  • Ultimate developer flexibility. Do whatever you need to. Just code it up.

Cons

  • Ultimate developer flexibility. Do whatever you need to. Just code it up.

This can be a problem. Think about when a developer leaves, and has not really transitioned his skills or knowledge to anyone else on the team. The ramp-up time for someone new to all this can be worrisome.  Add to that the possibility you have about 30-40 teams, hence 30-40 content areas and reviewer roles. Now, maybe the developer followed clever coding conventions so what would have been 500 lines of code got done in 50. But that sort of cleverness is only going to make it harder for the next guy to unravel what is really going on. Add to that any special handling for some of those roles. Anyway, you can use your imagination to come up with scary scenarios.

So, am I belittling the script soup that comes with the workflow entrée? Far from it. I think it’s delicious, if served in small portions. Hence the next two sections.

The Second Way - A Site Hierarchy

Yes. Define a hierarchy of Sites.

  • The parent site at the top is the main site. The parent site will have all the necessary pages containing a meaningful combination of Web Content Display portlet instances and Asset Publisher portlet instances.

  • The parent site may have some content, or none at all. The purpose of the parent site is to be. And a bit more: to serve as a coming-together ground for content from all the child sites.

  • Each child site reflects a team. Of course, each has its own content repository.

  • Site members of the child site contribute content.

  • Each child site still uses the same workflow that all its sibling sites use. But remember: each child site has its own Site Content Reviewer role. So, only members of the child site are candidates for reviewing content in it.

So, all that requirement noise from The First Way, such as Finance content must be reviewed by a Finance reviewer, gets muted.

  • We have a Finance site with users. Some of them are content reviewers. The workflows just work.

  • And when we do need some smarts added to the workflows via scripts, we add those in. E.g. if the content has a category Confidential, assign it to the specific user in this site having the category Department Head.

Small portions help avoid bloating.

The child sites may have pages with various content portlets on them, but none of the pages are served outside of the child site. So, we have some pretty sweet insulation here. It should be pretty clear by now what the portal architects had in mind.

The Third Way - An Organization Hierarchy

You remember the Users and Organizations section in the Control Panel. There really is an Organizations tab on that screen.

Define a hierarchy of Organizations that reflects the organization structure of the enterprise, more or less.

What is an Organization anyway? It’s basically a way to group users into an organizational unit. Departments can translate well to Organizations. See this awesome wiki article on Organizations, and how they’re different from User Groups.

https://dev.liferay.com/en/discover/portal/-/knowledge_base/7-1/organizations

Here are the salient points.

  • An Organization can be assigned users - that’s the whole point.

  • Each Organization can have an Organization Administrator designated for it. These privileged users can add users to the Organization, edit information for existing ones, or remove them.

  • When you define an organization, Liferay gives you an option to Create a site for it.

    • So, if you do that for all the organizations in your hierarchy, you get an implicit Site hierarchy (much like The Second Way) wherein the Organization Administrator is, implicitly, the Site Administrator.

And with that, everything we said in The Second Way comes into play. The Third Way is essentially the Second Way with Organizations in the mix.

Now, I’ve noticed a few points of interest with an Organization hierarchy owing to the implicitness of the site associated with it (i.e. if one was chosen to be created). But, I’m not going to bring any of that up in this post because I don’t think they pose practical problems. I’m hoping someone will call out what they believe are the real problems, if any.

After all, this, like everything else, is just an elaborate exercise to bring us the words, “Uh, actually, there is a better way to do that.”

Javeed Chida 2018-09-16T00:24:00Z
Categories: CMS, ECM