Assistance with Open Source adoption

Open Source News

How Page Fragments help the new web experience in Liferay Portal 7.1

Liferay - Mon, 09/10/2018 - 03:50

This is the second post explaining the new Web Experience functionalities released in version 7.1 of Liferay Portal.  As presented in the previous post, in order to empower business users, it is necessary to have Page Fragment collections available.


But, what are they and what is their use?


Page Fragments are “reusable page parts” created by web developers to be used by non-technical users. Page Fragments are designed and conceived to be assembled by marketers or business users to build Content Pages. To start creating Page Fragments, we will be required to add a Page Fragment collection (we will look at the tools available in a moment), but first...


How are Page Fragments developed?


Page Fragments are implemented by web developers using HTML, CSS, and JavaScript. The markup of a Page Fragment is regular HTML, but it can contain some special tags. Effectively, two main types of lfr tags add functionality to Page Fragments:


  • “lfr-editable” is used to make a fragment section editable. The editable content can be plain “text”, an “image”, or “rich-text”, depending on which of the three “type” options is used. Rich text provides a WYSIWYG editor for editing before publication.


<lfr-editable id="unique-id" type="text">
	This is editable text!
</lfr-editable>

<lfr-editable id="unique-id" type="image">
	<img src="...">
</lfr-editable>

<lfr-editable id="unique-id" type="rich-text">
	<h1>This is editable rich text!</h1>
	<p>It may contain almost any HTML elements</p>
</lfr-editable>


  • “lfr-widget-<>” is a group of tags used to embed widgets within Page Fragments and, therefore, to add dynamic behaviour. The corresponding widget name should be added in <>. For instance, “nav” for the Navigation Menu widget, “web-content” for Web Content Display, or “form” for the Forms widget.


<div class="nav-widget">
	<lfr-widget-nav> </lfr-widget-nav>
</div>
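The same pattern applies to the other widget tags mentioned above. For example, a sketch embedding the Web Content Display widget (the wrapper class name here is made up for illustration):

```html
<div class="web-content-widget">
	<lfr-widget-web-content> </lfr-widget-web-content>
</div>
```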


If you would like to get more detail on how to build Page Fragments, you can check  the Liferay 7.1 Documentation section on Developing Page Fragments.


What tools are there for the creation and management of Page Fragments?


Page Fragment collections are stored and organized in a new administration application. There we will find access to the Page Fragment editor, where front-end developers will be able to create new collections or edit existing ones. Importing/exporting collections is also an option.



An interesting additional piece of functionality is the “View Usages” option found in the kebab menu of Page Fragments that are in use. It allows you to track the usage of a given Page Fragment, that is, where and when it has been used: in which Pages, Page Templates, and Display Pages, and which version each of them is using. Page Fragments are designed to allow business users to do inline editing, which means web developers are not expected to make changes to the actual content, but some other edits are likely to be necessary (adjusting the size of an image, the position of text, and so on). To support these scenarios, changes are not propagated to Page Fragments in use by default, but the web developer is given a tool for bulk change propagation that applies to selected usages within the list.


Finally, a reminder that, should you be interested in starting from an example, Themes with an existing set of Page Fragments can be downloaded for free from the Marketplace. Both Fjord and Westeros Bank have now been updated for 7.1.


Also, remember that you can check the “Building Engaging Websites” free course available on Liferay University.

Ianire Cobeaga 2018-09-10T08:50:00Z
Categories: CMS, ECM

Extending OSGi Components

Liferay - Fri, 09/07/2018 - 20:46

A few months ago, in the Community Chat, one of our community members raised the question, "Why does Liferay prefer public pages over private pages?" For example, if you select the "Go to Site" option, if there are both private pages and public pages, Liferay sends you to the public pages.

Unfortunately, I don't have an answer to that question. However, through some experimentation, I am able to help answer a closely related question: "Is it possible to get Liferay to prefer private pages over public pages?"

Find an Extension Point

Before you can actually do any customization, you need to find out what is responsible for the behavior that you are seeing. Once you find that, it'll then become apparent what extension points are available to bring about the change you want to see in the product.

So to start off, you need to determine what is responsible for generating the URL for the "Go to Site" option.

Usually, the first thing to do is to run a search against your favorite search engine to see if anyone has explained how it works, or at least tried a similar customization before. If you're extremely fortunate, you'll find a proof of concept, or at least a vague outline of what you need to do, which will cut down on the amount of time it takes to implement your customization. If you're very fortunate, you'll find someone talking about the overall design of the feature, which will give you some hints on where you should look.

Sadly, in most cases, your search will probably come up empty. That's what would have happened in this case.

If you have no idea what to do, the next thing you should try is to ask someone if they have any ideas on how to implement your customization. For example, you might post on the Liferay community forums or you might try asking a question on Stackoverflow. If you have a Liferay expert nearby, you can ask them to see if they have any ideas on where to look.

If there are no Liferay experts, or if you believe yourself to be one, the next step is to go ahead and find it yourself. To do so, you will need a copy of the Liferay Portal source (shallow clones are also adequate for this purpose, which is beneficial because a full clone of Liferay's repository is on the order of 10 GB now), and then search for what's responsible for the behavior that you are seeing within that source code.

Find the Language Key

Since what we're changing has a GUI element with text, it necessarily has a language key responsible for displaying that text. This means that if you search through all the files in Liferay, you should be able to find something that reads "Go to Site". If you're using a different language, you'll need to search through the corresponding translated language files instead.

git ls-files | grep -F Language.properties | xargs grep -F "Go to Site"

In this case, searching for "Go to Site" will lead you to the module's language files, which tell us that the language key that reads "Go to Site" corresponds to the key go-to-site.

Find the Frontend Code

It's usually safe to assume that the module whose language file declares the key is also the module that uses it, which means we can restrict our search to just the module where the key lives.

git ls-files modules/apps/web-experience/product-navigation/product-navigation-site-administration | \
	xargs grep -Fl go-to-site

This will give us exactly one result.

Replace the Original JSP

If you were to choose to modify this JSP, a natural approach would be to follow the tutorial on JSP Overrides Using OSGi Fragments, and then call it a day.

With that in mind, a simple way to get the behavior we want is to let Liferay generate the URL, and then do a straight string replacement changing /web to /group (or /user if it's a user personal site) if we know that the site has private pages.
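As a standalone illustration of that replacement, here is a minimal plain-Java sketch; the URL and the /web and /group mappings are hard-coded here instead of coming from the Liferay PropsValues constants:

```java
public class GoToSiteUrlDemo {

	public static void main(String[] args) {
		String goToSiteURL = "http://localhost:8080/web/guest/home";

		// Swap the public servlet mapping (/web) for the private group
		// mapping (/group); only the first occurrence is replaced, so the
		// rest of the friendly URL is left untouched.
		String privateURL = goToSiteURL.replaceFirst("/web", "/group");

		System.out.println(privateURL);
	}

}
```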

<%@page import="com.liferay.portal.kernel.util.StringUtil" %>
<%@page import="com.liferay.portal.util.PropsValues" %>

<%
Group goToSiteGroup = siteAdministrationPanelCategoryDisplayContext.getGroup();

String goToSiteURL = siteAdministrationPanelCategoryDisplayContext.getGroupURL();

if (goToSiteGroup.getPrivateLayoutsPageCount() > 0) {
	goToSiteURL = StringUtil.replaceFirst(
		goToSiteURL,
		PropsValues.LAYOUT_FRIENDLY_URL_PUBLIC_SERVLET_MAPPING,
		goToSiteGroup.isUser() ?
			PropsValues.LAYOUT_FRIENDLY_URL_PRIVATE_USER_SERVLET_MAPPING :
			PropsValues.LAYOUT_FRIENDLY_URL_PRIVATE_GROUP_SERVLET_MAPPING);
}
%>

<aui:a cssClass="goto-link list-group-heading" href="<%= goToSiteURL %>" label="go-to-site" />

More Original Code

Now, let's imagine that we also want to worry about the go-to-other-site site selector and update it to provide the URLs we want. Investigating the site selector would lead you to item selectors, which would lead you to MySitesItemSelectorView and RecentSitesItemSelectorView, which would take you to view_sites.jsp.

We can see that there are three instances where it generates the URL by calling GroupURLProvider.getGroupURL directly: line 83, line 111, and line 207. We would simply follow the same pattern as before in each of these three instances, and we'd be finished with our customization.

If additional JSP changes would be needed, this process of adding JSP overrides and replacement bundles would continue.

Extend the Original Component

While we were lucky in this case and found that we could fix everything by modifying just two JSPs, we won't always be this lucky. Therefore, let's take the opportunity to understand whether there's a different way to solve the problem.

Following the path down to all the different things that this JSP calls leads us to a few options for which extension point we can potentially override in order to get the behavior we want.

First, we'll want to ask: is the package the class lives in exported? If it is not, we'll need to either rebuild the manifest of the original module to provide the package in Export-Package, or we will need to add our classes directly to an updated version of the original module. The latter is far less complicated, but the module would live in osgi/marketplace/override, which is not monitored for changes by default (see the portal property).
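As a sketch, the first option amounts to adding the class's package to the Export-Package header in the rebuilt module's bnd.bnd (the package name below is purely illustrative, not the real one):

```properties
Export-Package: com.example.site.admin.internal.display.context
```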

From there, you'll want to ask the question: what kind of Java class is it? In particular, you would ask dependency management questions. Is an instance of it managed via Spring? Is it retrieved via a static method? Is there a factory we can replace? Is it directly instantiated each time it's needed?

Once you know how it's instantiated, the next question is how you can change its value where it's used. If there's a dependency management framework involved, we make the framework aware of our new class. For a direct instantiation, then depending on the Liferay team that maintains the component, you might see these types of variables injected as request attributes (which you would handle by injecting your own class for the request attribute), or you might see this instantiated directly in the JSP.

Extend a Component in a Public Package

Let's start with SiteAdministrationPanelCategoryDisplayContext. Digging around in the JSP, you discover that, unfortunately, it's just a regular Java object whose constructor is called in site_administration_body.jsp. Since this is a plain old Java object that we instantiate from the JSP (it's not a managed dependency), it's a bad choice for an extension point unless you want to replace the class definition.

What about GroupURLProvider? Well, it turns out that GroupURLProvider is an OSGi component, which means its lifecycle is managed by OSGi. This means that we need to make OSGi aware of our customization, and then replace the existing component with our component, which will provide a different implementation of the getGroupURL method which prefers private URLs over public URLs.

From a "can I extend this in a different bundle, or will I need to replace the existing bundle" perspective, we're fortunate (the class is inside of a -api module, where everything is exported), and we can simply extend and override the class from a different bundle. The steps are otherwise identical, but it's nice knowing that you're modifying a known extension point.

Next, we declare our extension as an OSGi component.

@Component(
	immediate = true,
	service = GroupURLProvider.class
)
public class PreferPrivatePagesGroupURLProvider extends GroupURLProvider {
}

If you're the type of person who likes to sanity check after each small change by deploying your update, there's a wrinkle you will run into right here.

If you deploy this component and then blacklist the original component by following the instructions on Blacklisting OSGi Modules and Components, you'll run into a NullPointerException. This is because OSGi doesn't fulfill any of the references on the parent class, so when it calls methods on the original GroupURLProvider, none of the references that the code thought would be satisfied actually are satisfied, and it just calls methods on null objects.

You can address part of the problem by adding -dsannotations-options: inherit, which tells bnd to analyze the parent class for protected methods and protected fields.

-dsannotations-options: inherit

Of course, setting field values using methods is uncommon, and the things Liferay likes to attach @Reference to are generally private fields, so almost everything will still be null even after this is added. In order to work around that limitation, you'll need to use Java reflection. In a bit of an ironic twist, the convenience utility for replacing the private fields via reflection will itself be a private method.

@Reference(unbind = "unsetHttp")
protected void setHttp(Http http) throws Exception {
	_setSuperClassField("_http", http);
}

@Reference(unbind = "unsetPortal")
protected void setPortal(Portal portal) throws Exception {
	_setSuperClassField("_portal", portal);
}

protected void unsetHttp(Http http) throws Exception {
	_setSuperClassField("_http", null);
}

protected void unsetPortal(Portal portal) throws Exception {
	_setSuperClassField("_portal", null);
}

private void _setSuperClassField(String name, Object value) throws Exception {
	Field field = ReflectionUtil.getDeclaredField(
		GroupURLProvider.class, name);

	field.set(this, value);
}

Implement the New Business Logic

Now that we've extended the logic, what we'll want to do is steal all of the logic from the original method (GroupURLProvider.getGroupURL), and then flip the order on public pages and private pages, so that private pages are checked first.

@Override
protected String getGroupURL(
	Group group, PortletRequest portletRequest, boolean includeStagingGroup) {

	ThemeDisplay themeDisplay = (ThemeDisplay)portletRequest.getAttribute(
		WebKeys.THEME_DISPLAY);

	// Customization START

	// Usually Liferay passes false and then true. We'll change that to
	// instead pass true and then false, which will result in the Go to Site
	// preferring private pages over public pages whenever both are present.

	String groupDisplayURL = group.getDisplayURL(themeDisplay, true);

	if (Validator.isNotNull(groupDisplayURL)) {
		return _http.removeParameter(groupDisplayURL, "p_p_id");
	}

	groupDisplayURL = group.getDisplayURL(themeDisplay, false);

	if (Validator.isNotNull(groupDisplayURL)) {
		return _http.removeParameter(groupDisplayURL, "p_p_id");
	}

	// Customization END

	if (includeStagingGroup && group.hasStagingGroup()) {
		try {
			if (GroupPermissionUtil.contains(
					themeDisplay.getPermissionChecker(), group,
					ActionKeys.VIEW_STAGING)) {

				return getGroupURL(group.getStagingGroup(), portletRequest);
			}
		}
		catch (PortalException pe) {
			_log.error(
				"Unable to check permission on group " + group.getGroupId(),
				pe);
		}
	}

	return getGroupAdministrationURL(group, portletRequest);
}

Notice that in copying the original logic, we need a reference to _http. We can either replace that with HttpUtil, or we can store our own private copy of _http. So that the code looks as close to the original as possible, we'll store our own private copy of _http.

@Reference(unbind = "unsetHttp")
protected void setHttp(Http http) throws Exception {
	_http = http;

	_setSuperClassField("_http", http);
}

protected void unsetHttp(Http http) throws Exception {
	_http = null;

	_setSuperClassField("_http", null);
}

Manage a Component's Lifecycle

At this point, all we have to do is disable the old component, which we can do by following the instructions on Blacklisting OSGi Modules and Components.

However, what if we wanted to do that at the code level rather than at the configuration level? Maybe we want to simply deploy our bundle and have everything just work without requiring any manual setup.

Disable the Original Component on Activate

First, you need to know that the component exists. If you attempt to disable the component before it exists, that's not going to do anything for you. We know it exists once a @Reference is satisfied. However, because we're going to disable it immediately upon realizing it exists, we want to make the reference optional. This leads us to the following rough outline, where we call _deactivateExistingComponent once we have our reference satisfied.

@Reference(
	cardinality = ReferenceCardinality.OPTIONAL,
	policy = ReferencePolicy.DYNAMIC,
	policyOption = ReferencePolicyOption.GREEDY,
	target = "(",
	unbind = "unsetGroupURLProvider"
)
protected void setGroupURLProvider(GroupURLProvider groupURLProvider)
	throws Exception {

	_deactivateExistingComponent();
}

protected void unsetGroupURLProvider(GroupURLProvider groupURLProvider) {
}

Next, you need to be able to access a ServiceComponentRuntime, which provides a disableComponent method. We can get access to this with another @Reference. If we exported this package, we would probably want this to be set using a method, for the reasons we ran into earlier that required us to implement our _setSuperClassField method, but for now, we'll be content with leaving it as private.

@Reference private ServiceComponentRuntime _serviceComponentRuntime;

Finally, in order to call ServiceComponentRuntime.disableComponent, you need to generate a ComponentDescriptionDTO, which coincidentally needs just a name and the bundle that holds the component. In order to get the Bundle, you need to have the BundleContext.

@Activate
public void activate(
		ComponentContext componentContext, BundleContext bundleContext,
		Map<String, Object> config)
	throws Exception {

	_bundleContext = bundleContext;

	_deactivateExistingComponent();
}

private void _deactivateExistingComponent() throws Exception {
	if (_bundleContext == null) {
		return;
	}

	String componentName = GroupURLProvider.class.getName();

	Collection<ServiceReference<GroupURLProvider>> serviceReferences =
		_bundleContext.getServiceReferences(
			GroupURLProvider.class, "(" + componentName + ")");

	for (ServiceReference serviceReference : serviceReferences) {
		Bundle bundle = serviceReference.getBundle();

		ComponentDescriptionDTO description =
			_serviceComponentRuntime.getComponentDescriptionDTO(
				bundle, componentName);

		_serviceComponentRuntime.disableComponent(description);
	}
}

Enable the Original Component on Deactivate

If we want to be a good OSGi citizen, we also want to make sure that the original component is still available whenever we stop or undeploy our module. This is really the same thing in reverse.

@Deactivate
public void deactivate() throws Exception {
	_activateExistingComponent();

	_bundleContext = null;
}

private void _activateExistingComponent() throws Exception {
	if (_bundleContext == null) {
		return;
	}

	String componentName = GroupURLProvider.class.getName();

	Collection<ServiceReference<GroupURLProvider>> serviceReferences =
		_bundleContext.getServiceReferences(
			GroupURLProvider.class, "(" + componentName + ")");

	for (ServiceReference serviceReference : serviceReferences) {
		Bundle bundle = serviceReference.getBundle();

		ComponentDescriptionDTO description =
			_serviceComponentRuntime.getComponentDescriptionDTO(
				bundle, componentName);

		_serviceComponentRuntime.enableComponent(description);
	}
}

Once we deploy our change, we find that a few other components in Liferay are using the GroupURLProvider provided by OSGi. Among these is the go-to-other-site site selector, which would have required another bundle replacement with the previous approach.

Minhchau Dang 2018-09-08T01:46:00Z
Categories: CMS, ECM

Gradle: compile vs compileOnly vs compileInclude

Liferay - Fri, 09/07/2018 - 12:55

By request, a blog to explain compile vs compileOnly vs compileInclude...

First, it is important to understand that these are actually names for configurations used during the build process, specifically when it comes to dependency management. In Maven, the equivalents are implemented as scopes.

Each time one of these three types is listed in your dependencies {} section, you are adding the identified dependency to that configuration.

All three configurations add a dependency to the compile phase of the build, when your Java code is being compiled into bytecode.

The real difference between these configurations is their effect on the manifest of the jar when it is built.
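Before going through each one in detail, here is a sketch of how all three might appear together in a build.gradle dependencies {} block (the artifacts chosen here are purely illustrative):

```groovy
dependencies {
	// compile: on the compile classpath and recorded in Import-Package
	compile group: 'org.apache.poi', name: 'poi-ooxml', version: '4.0.0'

	// compileOnly: on the compile classpath, but no Import-Package entry
	compileOnly group: 'com.google.code.findbugs', name: 'annotations', version: '3.0.1'

	// compileInclude: compiled against and embedded into the module jar
	compileInclude group: 'org.apache.commons', name: 'commons-lang3', version: '3.8.1'
}
```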


So compile is the one that is easiest to understand. You are declaring a dependency that your Java code needs in order to compile cleanly.

As a best practice, you only want to list compile dependencies on those libraries that you really need to compile your code. For example, you might need group: 'org.apache.poi', name: 'poi-ooxml', version: '4.0.0' for reading and writing Excel spreadsheets, but you wouldn't want to chase down and declare a compile dependency on everything POI itself needs. As transitive dependencies, Gradle will handle those for you.

When the compile occurs, this dependency will be included in the classpath for javac to compile your java source file.

Additionally, when it comes time to build the jar, packages that you use in your java code from POI will be added as Import-Package manifest entries.

It is this addition which will result in the "Unresolved Reference" error about the missing package if it is not available from some other module in the OSGi container.
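For illustration, a bundle built this way would end up with manifest entries along these lines; the exact packages and version ranges depend on what your code actually imports, so these are made up for the example:

```
Import-Package: org.apache.poi.ss.usermodel;version="[4.0,5)",
 org.apache.poi.xssf.usermodel;version="[4.0,5)"
```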

For those with a Maven background, the compile configuration is the same as Maven's compile scope.


The compileOnly configuration is used to itemize a dependency that you need to compile your code, same as compile above.

The difference is that packages your Java code uses from a compileOnly dependency will not be listed as Import-Package manifest entries.

The common example for using compileOnly typically revolves around annotations. I like to use FindBugs on my code (don't laugh, it has saved my bacon a few times and I feel I deliver better code when I follow its suggestions). Sometimes, however, FindBugs gets a false positive result, something it thinks is a bug but I know is exactly how I need it to be.

So the normal solution here is to add the @SuppressFBWarnings annotation on the method; here's one I used recently:

@SuppressFBWarnings(
	value = "NP_NULL_PARAM_DEREF",
	justification = "Code allocates always before call.")
public void emit(K key) {
	...
}

FindBugs was complaining that I didn't check key for null, but it is actually emitting within the processing of a Map entry, so the key can never be null. Rather than add the null check, I added the annotation.

To use the annotation, I of course need to include the dependency:

compileOnly ''

I used compileOnly in this case because I only need the annotation for the compile itself; the compile will strip out the annotation info from the bytecode because it is not a runtime annotation, so I do not need this dependency after the compile is done.

And I definitely don't want it showing up in the Import-Package manifest entry.

In OSGi, we will also tend to use compileOnly for the org.osgi.core and osgi.cmpn dependencies, not because we don't need them at runtime, but because we know that within an OSGi container these packages will always be provided (so the Manifest does not need to enforce it) plus we might want to use our jar outside of an OSGi container.
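As a sketch, those OSGi API dependencies would then look like this in build.gradle (the versions here are illustrative):

```groovy
dependencies {
	// Provided by the OSGi container at runtime, so compileOnly
	compileOnly group: 'org.osgi', name: 'org.osgi.core', version: '6.0.0'
	compileOnly group: 'org.osgi', name: 'osgi.cmpn', version: '6.0.0'
}
```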

For those with a Maven background, the compileOnly configuration is similar to Maven's provided scope.


compileInclude is the last configuration to cover. Like compile and compileOnly, this configuration will include the dependency in the compile classpath.

The compileInclude configuration was actually introduced by Liferay and is included in Liferay's Gradle plugins.

The compileInclude configuration replaces the manual steps from my OSGi Dependencies blog post, option #4, including the jars in the bundle.

In fact, everything my blog talks about, adding the Bundle-ClassPath directive and the -includeresource instruction to the bnd.bnd file, compileInclude does for you. Where compileInclude shines, though, is that it will also include some of the transitive dependencies in the module as well.

Note how I said that some of the transitive dependencies are included? I haven't quite figured out how it decides which transitive dependencies to include, but I do know it is not always 100% correct. I've had cases where it missed a particular transitive dependency. I do know it will not include optional dependencies, and that may have been the cause in those cases. To fix it, though, I would just add a compileInclude configuration line for the missing transitive dependency.

You can disable the transitive dependency inclusion by adding a flag at the end of the declaration. For example, if I only wanted poi-ooxml but for some reason didn't want its transitive dependencies, I could use the following:

compileInclude group: 'org.apache.poi', name: 'poi-ooxml', version: '4.0.0', transitive:false

It's then up to you to include or exclude the transitive dependencies, but at least you won't need to manually update the bnd.bnd file.

If you're getting the impression that compileInclude will mostly work but may make some bad choices (including some you don't want and excluding some that you need), you would be correct. It will never offer you the type of precise control you can have by using Bundle-ClassPath and -includeresource. It just happens to be a lot less work.

For those who use Maven, I'm sorry but you're kind of out of luck as there is no corresponding Maven scope for this one.


I hope this clarifies whatever confusion you might have with these three configurations.

If you need recommendations of what configuration to use when, I guess I would offer the following:

  • For packages that you know will be available from the OSGi container, use the compile configuration. This includes your custom code from other modules, all com.liferay code, etc.
  • For packages that you do not want from the OSGi container or don't think will be provided by the OSGi container, use the compileInclude configuration. This is basically all of those third party libraries that you won't be pushing as modules to the container.
  • For all others, use the compileOnly configuration.


David H Nebinger 2018-09-07T17:55:00Z
Categories: CMS, ECM

ML taking stage at API World and happy hour in Mountain View

SnapLogic - Fri, 09/07/2018 - 12:47

There’s going to be a lot happening next week which I’m really excited to share. The common theme is helping our customers overcome their obstacles, which is something that I, and the rest of the team at SnapLogic, are passionate about. First, I will be at API World in San Jose, California on Monday, September[...] Read the full article here.

The post ML taking stage at API World and happy hour in Mountain View appeared first on SnapLogic.

Categories: ETL

Metadata Management 101: What, Why and How

Talend - Fri, 09/07/2018 - 10:26

Metadata Management has slowly become one of the most important practices for a successful digital initiative strategy. With the rise of distributed architectures such as Big Data and Cloud which can create siloed systems and data, metadata management is now vital for managing the information assets in an organization. The internet has a lot of literature around this concept and readers can easily get confused with the terminology. In this blog, I wanted to give the users a brief overview of metadata management in plain English.

What does metadata management do?

Let’s get started with the basics. Though there are many definitions out there for Metadata Management, the core functionality is enabling a business user to search and identify information on key attributes in a web-based user interface.

An example of a searchable key attribute could be Customer ID or a member name. With a proper metadata management system in place, business users will be able to understand where the data for that attribute is coming from and how the data in the attribute was calculated. They will be able to visualize which enterprise systems in the organization the attribute is being used in (Lineage) and will be able to understand the impact of changing something about the attribute, such as its length, on other systems (Impact Analysis).

Technical users also have needs for metadata management. By combining business metadata with technical metadata, a technical user will also be able to find out which ETL job or database process is used to load data into the attribute. Operational metadata, such as control tables in a data warehouse load, can also be combined into this integrated metadata model. This is powerful information for an end user to have at their fingertips. The end result of metadata management can be in the form of another ‘database’ of the metadata of the company’s key attributes. The industry term for such a database is a Data Catalog, glossary, or data inventory.

How does metadata management work?

Metadata Management is only one of the initiatives of a holistic Data Governance program but this is the only initiative which deals with “Metadata”. Other initiatives such as Master Data Management (MDM) or Data Quality (DQ) deal with the actual “data” stored in various systems. Metadata management integrates metadata stores at the enterprise level.

Tools like Talend Metadata Manager provide an automated way to parse and load different types of metadata. The tool also enables you to build an enterprise model based on the metadata generated from different systems such as your data warehouse, data integration tools, data modelling tools, etc.

Users will be able to resolve conflicts based on, for example, attribute names and types. You will also be able to create custom metadata types to “stitch” metadata between two systems. A completely built metadata management model gives a 360-degree view of how different systems in your organization are connected together. This model can be a starting point for any new Data Governance initiative. Data modelers will now have one place to look for a specific attribute and use it in their own data models. This model is also the foundation of the ‘database’ that we talked about in the earlier section. Just like any other Data Governance initiative, as the metadata in individual systems changes, the model needs to be updated following an SDLC methodology which includes versioning, workflows, and approvals. Access to the metadata model should also be managed by creating roles, privileges, and policies.

Why do we need to manage metadata?

The basic answer is: trust. If metadata is not managed during the system lifecycle, silos of inconsistent metadata will be created in the organization that do not meet any team’s full needs and provide conflicting information. Users would not know how much to trust the data, as there is no metadata to indicate how and when the data got to the system and what business rules were applied.

Costs also need to be considered. Without effectively managing metadata, each development project has to go through the effort of defining data requirements, increasing costs and decreasing efficiency. Users are presented with many tools and technologies, creating redundancy and excess costs, and do not get the full value of the investment because the data they are looking for is not available. Data definitions are duplicated across multiple systems, driving higher storage costs.

As a business matures and more and more systems are added, it needs to consider how the metadata (and not just the data) should be governed. Managing metadata provides clear benefits to business and technical users and to the organization as a whole. I hope this has been a useful intro to the very basics of metadata management. Until next time!

The post Metadata Management 101: What, Why and How appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Automate and simplify your inventory management process

VTiger - Fri, 09/07/2018 - 00:29
If inventory management is an essential part of your business, we have good news for you! We’ve built a new Inventory Add-on that helps optimize your inventory management process and warehouse operations. The add-on streamlines your order management by allowing you to create delivery notes, receipt notes, credit notes, and bills within Vtiger. This means […]
Categories: CRM

[Step-by-Step] How to Load Salesforce Data Into Snowflake in Minutes

Talend - Thu, 09/06/2018 - 16:47

As cloud technologies move into the mainstream at an unprecedented rate, organizations are augmenting existing, on-premise data centers with cloud technologies and solutions—or even replacing on-premise data storage and applications altogether. But in almost any modern environment, data will be gathered from multiple sources at a variety of physical locations (public cloud, private cloud, or on-premise). Talend Cloud is Talend’s Integration Platform-as-a-Service (IPaaS), a modern cloud and on-premises data and application integration platform, and it is particularly well suited to cloud-to-cloud integration projects.

To explore the capabilities and features of Talend Cloud, anyone can start a 30-day free trial. Several sample jobs are available for import as part of the Talend Cloud trial to get you familiar with the Talend IPaaS solution. The video below walks you through two sample jobs to load data from Salesforce into a Snowflake database.

To get started, you obviously need to be a user (or trial user) of Talend Cloud, the Snowflake cloud data warehouse and Salesforce. Then there’s a simple two-step process to migrate Salesforce data to Snowflake using Talend Cloud. The first job uses a Snowflake connection to create a user-defined database with three tables in Snowflake. The second job then migrates the Salesforce data into those three tables in the Snowflake cloud data warehouse.

The full step-by-step process is available here with attached screenshots. Want to try Talend Cloud? Start your trial today!

The post [Step-by-Step] How to Load Salesforce Data Into Snowflake in Minutes appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Can We Get a Little Help Over Here?

Liferay - Thu, 09/06/2018 - 13:03

One of the benefits that you get from an enterprise-class JEE application server is a centralized administration console.

Rather than needing to manage nodes individually like you would with Tomcat, the JEE admin console can work on the whole cluster at one time.

But with Liferay 7 CE and Liferay 7 DXP and the deployment of OSGi bundle jars, portlet/theme wars and Liferay .lpkg files, the JEE admin console cannot be used to push your shiny new module or even a theme war file, because it won't know to drop these files in the Liferay deploy folder.

Enter the Deployment Helper

So Liferay created a Maven and Gradle plugin called the Deployment Helper to give you a hand here.

Using the Deployment Helper, the last part of your build is the generation of a war file that contains the bundle jars and theme wars, but is a single war artifact.

This artifact can be deployed to all cluster nodes using the centralized admin console.

Adding the Deployment Helper to the Build

To add the Deployment Helper to your Gradle-based build:

To add the Deployment Helper to your Maven-based build:

While both pages offer the technical details, they are still awfully terse when it comes to usage.

Gradle Deployment Helper

Basically for Gradle you get a new task, the buildDeploymentHelper task. You can execute gradlew buildDeploymentHelper on the command line after including the plugin and you'll get a war file, but probably one that you'll want to configure.

The plugin is supposed to pull in all jar files for you, so that will cover all of your modules. You'll want to update the deploymentFiles to include your theme wars and any of the artifacts you might be pulling in from the legacy SDK.

In the example below, my Liferay Gradle Workspace has the following build.gradle file:

buildscript {
    dependencies {
        classpath group: "com.liferay", name: "com.liferay.gradle.plugins", version: "3.12.48"
        classpath group: "com.liferay", name: "com.liferay.gradle.plugins.deployment.helper", version: "1.0.3"
    }
    repositories {
        maven {
            url ""
        }
    }
}

apply plugin: "com.liferay.deployment.helper"

buildDeploymentHelper {
    deploymentFiles = fileTree('modules') { include '**/build/libs/*.jar' } +
        fileTree('themes') { include '**/build/libs/*.war' }
}

This will include all wars from the theme folder and all module jars from the modules folder. Since I'm being specific on the paths for files to include, any wars and jars that might happen to be polluting the directories will be avoided.

Maven Deployment Helper

The Maven Deployment Helper has a similar task, but of course you're going to use the pom to configure and you have a different command line.

The Maven equivalent of the Gradle config would be something along the lines of:

<build>
  <plugins>
    ...
    <plugin>
      <groupId>com.liferay</groupId>
      <artifactId>com.liferay.deployment.helper</artifactId>
      <version>1.0.4</version>
      <configuration>
        <deploymentFileNames>
          modules/my-mod-a/build/libs/my-mod-a-1.0.0.jar,
          modules/my-mod-b/build/libs/my-mod-b-1.0.0.jar,
          ...,
          themes/my-theme/build/libs/my-theme-1.0.0.war
        </deploymentFileNames>
      </configuration>
    </plugin>
    ...
  </plugins>
</build>

Unfortunately you can't do some cool wildcard magic here; you're going to have to list out the ones to include.

Deployment Helper War

So you've built a war now using the Deployment Helper, but what does it contain? Here's a sample from one of my projects:

Basically you get a single class, the com.liferay.deployment.helper.servlet.DeploymentHelperContextListener class.

You also get a web.xml for the war.

And finally, you get all of the files that you listed for the deployment helper task.


You can find the source for DeploymentHelperContextListener here, but I'll give you a quick summary.

So we have two key methods, copy() and contextInitialized().

The copy() method does, well, the copying of data from the input stream (the artifact to be deployed) to the output stream (the target file in the Liferay deploy folder). Nothing fancy.

The contextInitialized() method is the implementation of the ServletContextListener interface and will be invoked when the application container has constructed the war's ServletContext.

If you scan the method, you can see how the parameters that are options to the plugins will eventually get to us via context parameters.

It then loops through the list of deployment filenames, and for each one in the list it will create the target file in the deploy directory (the Liferay deploy folder), and it will use the copy() method to copy the data out to the filesystem.

Lastly it will invoke the DeployManagerUtil.undeploy() on the current servlet context (itself) to attempt to remove the deployment helper permanently. Note that per the listed restrictions, this may not actually undeploy the Deployment Helper war.
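Stripped of the Liferay specifics, that loop is just "read a comma-delimited list, copy each file into the deploy folder." Here is a self-contained sketch of it in plain Java; the class and method names are mine for illustration, not the actual DeploymentHelperContextListener API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class DeployCopier {

    /**
     * Copies each named file into the deploy folder, mimicking the loop
     * the context listener runs over its comma-delimited context
     * parameter. Names and signature are illustrative only.
     */
    public static void deployAll(String deploymentFiles, Path sourceRoot, Path deployDir)
            throws IOException {
        Files.createDirectories(deployDir);
        for (String name : deploymentFiles.split(",")) {
            Path source = sourceRoot.resolve(name.trim());
            // Target file in the Liferay deploy folder; the hot deploy
            // scanner picks it up from there.
            Path target = deployDir.resolve(source.getFileName());
            Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```

The real listener gets its file list and target path from the context parameters that the plugins write into the war, then attempts the self-undeploy described above.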


The web.xml file is pretty boring:

<?xml version="1.0"?>
<web-app xmlns=""
  xmlns:xsi="" version="3.0"
  xsi:schemaLocation="">
  <context-param>
    <description>A comma delimited list of files in the WAR that should be deployed to the
      deployment-path. The paths must be absolute paths within the WAR.</description>
    <param-name>deployment-files</param-name>
    <param-value>/WEB-INF/micro.maintainance.outdated.task-1.0.0.jar,
      /WEB-INF/, ...
    </param-value>
  </context-param>
  <context-param>
    <description>The absolute path to the Liferay deploy folder on the target system.</description>
    <param-name>deployment-path</param-name>
    <param-value></param-value>
  </context-param>
  <listener>
    <listener-class>com.liferay.deployment.helper.servlet.DeploymentHelperContextListener</listener-class>
  </listener>
</web-app>

Each of the files to be deployed is listed as a parameter for the context listener.

The rest of the war is just the files themselves.

An Important Limitation

So there is one important limitation you should be aware of...

ContextListeners are invoked every time the container is restarted or the war is (re)deployed.

If your deployment helper cannot undeploy itself, every time you restart the container, all of your artifacts in the Deployment Helper war are going to be processed again.

So, as part of your deployment process, you should verify and ensure that the Deployment Helper has been undeployed, whether it can remove itself or whether you must manually undeploy from the centralized console.


So now, using the Deployment Helper, you can create a war file that contains files to deploy to Liferay: the module jars, the portlet wars, the theme wars and yes, you could even deploy Liferay .lpkg files, .lar files and even your license XML file (for DXP).

You can create the Deployment Helper war file directly out of your build tools. If you are using CI, you can use the war as one of your tracked artifacts.

Your operations folks will be happy in that they can return to using the centralized admin console to deploy a war to all of the nodes; they won't need to copy everything to the target servers' deploy folders manually.

They may gripe a little about deploying the bundle followed by an undeploy when the war has started, but you just need to remind them of the pain of the older, manual deployment process that you're saving them from.


David H Nebinger 2018-09-06T18:03:00Z
Categories: CMS, ECM

Moving to Microsoft Azure Cloud? 8 tips for evaluating data integration platforms

SnapLogic - Thu, 09/06/2018 - 12:59

As organizations increasingly move their data and operations to the Microsoft Azure Cloud, they must migrate data from old systems stored on-premises. Unfortunately, IT can’t always meet data integration deadlines by writing custom code, and legacy integration technologies drive up the time and cost of migration. The key to success is finding a data integration[...] Read the full article here.

The post Moving to Microsoft Azure Cloud? 8 tips for evaluating data integration platforms appeared first on SnapLogic.

Categories: ETL

Today's Top Ten: Ten Reasons to Avoid Sessions

Liferay - Wed, 09/05/2018 - 19:31

From the home office outside of Charleston, South Carolina, here are the top ten reasons to avoid Portlet and HTTP session storage:

Number 10 - With sticky sessions, traffic originating from a web address will be sent to the same node, even if other nodes in the cluster have more capacity. You lose some of the advantages of a cluster because you cannot direct traffic to nodes with available capacity; instead, your users are stuck on the node they first landed on, and hopefully someone doesn't start a heavyweight task on that node...

Number 9 - With sticky sessions, if the node fails, users on that node lose whatever was in the session store. The cluster will flip them over to an available node, but it cannot magically restore the session data.

Number 8 - If you use session replication to protect against node outage, the inter-node network traffic increases to pass around session data that most of the time is not necessary. You need it in case of node failure, but when was your last node failure? How much network and CPU capacity are you sacrificing for this "just in case" standby?

Number 7 - Neither sticky session data nor even session replication will help you in case of disaster recovery. If your primary datacenter goes off the net and your users are switched over to the DR site, you often have backups and files syncing to DR for recovery, but who syncs session data?

Number 6 - Session storage is a memory hit. Session data is typically kept in memory, so if you have 5 KB of session data per user but you get slashdotted and 100k users hit your system, that's roughly a 500 MB memory hit you're going to take for session storage. If your numbers are bigger, well, you can do the math. If you have replication, all of that data is replicated and retained on all nodes.

Number 5 - With sticky sessions, to upgrade a node you pull it out of the cluster, but now you need to wait for either folks to log out (ending their session), or wait for the sessions to expire (typically 30 minutes, but if you have an active user on that node, you could wait for a long time), or you can just kill the session and impact the user experience. All for what should be a simple process of taking the node out of circulation, updating the node, then getting it back into circulation.

Number 4 - Having a node out of circulation for a long time because of sessions is a risk that your cluster will not be able to handle capacity, or will be handling it with fewer resources. In a two-node cluster, if you have one node out of circulation in preparation for an update and the second fails, you have no active servers available to serve traffic.

Number 3 - In a massive cluster, session replication is not practical. The nodes will spend more time trying to keep each other's sessions in sync than they will serving real traffic.

Number 2 - Session timeouts discard session data, whether clients want that or not. If I put 3 items in my shopping cart but step away to play with my kids, when I come back and log back in, those 3 items should still be there. For much of the data we would otherwise stuff into a session, users expect that this in-flight data will come back when they log back in.

And the number one reason to avoid sessions:

Number 1 - They are a hack!

Okay, so this was a little corny, I know. But it is also accurate and important.

All of these issues are things you will face deploying Liferay, your portlets, or really any web application. If you avoid session storage at all costs, you avoid all of these problems.

But I get it. As a developer, session storage is just so darn attractive and easy and alluring. Don't know where to put it but need it in the future? Session storage to the rescue!

But really, session storage is like drugs. If you start using them, you're going to get hooked. Before you know it you're going to be abusing sessions, and your applications are going to suffer as a result. They really are better off avoided altogether.

There's a reason that shopping carts don't store themselves in sessions; they were too problematic. Your data is probably a lot more important than what kind of toothpaste I have in my cart, so if persistence is good enough for them, it is good enough for your data too.

And honestly, there are just so many better alternatives!

Have a multi-page form where you want to accumulate results for a single submit? Hidden fields carrying the values from the previous page's form elements will use client-side storage for this data.

Cookies are another way to push the storage to the browser, keep it off of the server and keep your server side code stateless.

Data storage in a NoSQL database like Mongo is very popular, can be shared across the cluster (no replication) and, since it is schemaless, can easily persist incomplete data. 

It's even possible to do this in a relational database too.

So don't be lured in by the Siren's song. Avoid the doom they offer, and avoid session storage at all costs!

David H Nebinger 2018-09-06T00:31:00Z
Categories: CMS, ECM

Robots on steroids: RPA is here to stay

SnapLogic - Wed, 09/05/2018 - 13:20

Previously published on ITProPortal.  Few narratives have captured the imaginations of business leaders more than robots—bots tirelessly doing work faster than their human colleagues on a 24/7 constant basis. Leaving aside for the moment the socioeconomic ramifications of robots reducing job opportunities, the fact remains that inventions like Robotic Process Automation, or RPA, make life[...] Read the full article here.

The post Robots on steroids: RPA is here to stay appeared first on SnapLogic.

Categories: ETL

The Good, The Bad and The Ugly

Liferay - Wed, 09/05/2018 - 11:34
The Ugly

In one of the first Liferay projects I ever did, I had a need to have some Roles in the environment. They needed to be there so I knew they were available for a custom portlet to assign when necessary.

I was working for an organization that had a structured Software Configuration Management process that was well defined and well enforced.

So code deployments were a big deal. There were real hard-copy deployment documents that itemized everything the Operations folks needed to do. As the developer, I didn't have access to any of the non-development environments, so anything that needed to be changed in the environment had to be part of a deployment and it had to be listed in the deployment documents.

And it was miserable. Sometimes I would forget to include the Role changes. Sometimes the Operations folks would skip the step I had in the docs. Sometimes they would fat-finger the data input and I'd end up with a Roel instead of my expected Role.

The Bad

So I quickly researched and implemented a Startup Hook.

For those that don't know about them, a Startup Hook was a Liferay-supported way to run code either at container start (Global) or at application start (Application). The big difference is that a container start only happens once, but an Application start happens when the container starts but also every time you would redeploy your hook.

Instead of having to include documentation telling the Operations folks how to create the roles I needed, my application startup hook would take care of that.

There were just three issues that I would have to code around:

The code runs at every startup (either Global or Application), so you need to check before doing something to see whether you had already completed the action. Since I wouldn't want to keep trying (and failing) to add a duplicate role, I had to check whether it was there and only add it if it was not found.

There is no "memory" to work from, so each implementation must expect that it could be running in any state. I had a number of roles I was using, and I would oftentimes need to add a couple more. My startup action could not assume there were no roles, nor could it assume that the previous list of roles was done and only the newer ones needed to be added. No, I had to check each role individually, as that was the only way to ensure my code would work in any environment it was deployed to.

The last issue, well that was a bug in my code that I couldn't fix. My startup action would run and would create a missing role. That was good. But if an administrator changed the role name, the next time my startup action would run, it would recreate the role. Because it was missing, see? Not because I had never created it, but because an administrator took steps to remove or rename it.

That last one was nasty. I could have taken the approach of populating a Release record using ReleaseLocalService, but that would be one other check that my startup action would need to perform. Pretty soon my simple startup action would turn into a development effort on its own.
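The shape of that startup action is easy to sketch. In the toy version below, a plain Map stands in for Liferay's role service (the class and method names are hypothetical); the point is that every run has to re-check every role:

```java
import java.util.List;
import java.util.Map;

public class RoleStartupAction {

    /** Hypothetical stand-in for a role service; maps role name to role. */
    private final Map<String, String> roleStore;

    public RoleStartupAction(Map<String, String> roleStore) {
        this.roleStore = roleStore;
    }

    /**
     * Runs at every startup, so each role must be checked individually:
     * the action has no memory of what a previous run already created.
     */
    public void ensureRoles(List<String> requiredRoles) {
        for (String name : requiredRoles) {
            // Only add the role if it is not already there; otherwise we
            // would fail (or duplicate) on every single restart.
            roleStore.putIfAbsent(name, name);
        }
    }
}
```

Note that this sketch has the same unfixable bug described above: if an administrator deliberately removes or renames one of the roles, the next run sees it as "missing" and recreates it.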

The Good

With Liferay 7 CE and Liferay DXP, though, we have something better - the Upgrade Process. [Well, actually it was available in 6.2, but I didn't cotton to it until LR7, my bad]

I've blogged about this before because, well, I'm actually a huge fan.

By using an upgrade process, my old problem of creating the roles is easily handled. But even better, the problems that came from my startup action have been eliminated!

An upgrade process will only run once; if it successfully completes, it will not execute again. So my code to create the roles, it doesn't need to see if it was already done because it won't run twice.

An upgrade process has a memory in that it only runs the steps necessary to get to the latest version. So I can create some roles in 1.0.0, create additional roles in 1.1.0, I could remove a role in 1.1.1, add some new roles in 1.2.0... Regardless of the environment my module is deployed to, only the necessary upgrade processes will run. So a new deployment to a clean Liferay will run all of the upgrades, in order, but a deployment to an environment that had 1.1.1 will only execute the 1.2.0 upgrade.

The bug from the startup action process? It's gone. Since my upgrade will only run once, if an admin changes a role name or removes it, my upgrade process will not run again and recreate it.
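That "memory" is easy to model in a few lines. The sketch below is not Liferay's implementation (which records versions in the Release table and registers steps via an UpgradeStepRegistrator); it just shows the behavior: only the steps above the recorded version run, and each runs once per environment:

```java
import java.util.Map;
import java.util.TreeMap;

public class UpgradeRunner {

    // Ordered map of schema version -> upgrade step. Lexicographic
    // ordering is fine for the single-digit versions used here.
    private final TreeMap<String, Runnable> steps = new TreeMap<>();

    public void register(String version, Runnable step) {
        steps.put(version, step);
    }

    /**
     * Runs only the steps strictly above the recorded version and
     * returns the new recorded version, so a clean environment runs
     * everything while an environment at 1.1.1 runs only 1.2.0.
     */
    public String upgradeFrom(String recordedVersion) {
        String latest = recordedVersion;
        for (Map.Entry<String, Runnable> e
                : steps.tailMap(recordedVersion, false).entrySet()) {
            e.getValue().run();
            latest = e.getKey();
        }
        return latest;
    }
}
```

Because the recorded version only moves forward, an administrator's later change to a role never triggers a re-run the way a startup action does.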


Besides being an homage to one of the best movies ever made, I hope I've pointed out the different ways that have been supported over time for preparing an environment to run your code.

The ugly way, the manual way, well you really want to discard this as it does carry a risk of failure due to user error.

The bad way, well it was the only way for Liferay before 7, but it has obvious issues and a new and better process exists to replace them.

The good way, the upgrade process, is a solid way to prepare an environment for running your module.

A good friend of mine, Todd, recently told me he wasn't planning on using an upgrade process because he was not doing anything with a Liferay upgrade.

I think this suggests that the "upgrade process" perhaps has an invalid name. Upgrade processes are not limited to Liferay upgrades.

Perhaps we should call them "Environment Upgrade Processes" as that would imply we are upgrading the environment, not just Liferay.

What do you think? Do you have a better name? If so, I'd love to hear it!

David H Nebinger 2018-09-05T16:34:00Z
Categories: CMS, ECM

How to Use Ecommerce Data to Understand Customer Behavior

PrestaShop - Wed, 09/05/2018 - 10:53
Understanding customer behavior is one of the best ways to create an ecommerce strategy around customer satisfaction.
Categories: E-commerce

Why You Need Local Payment Options When Going Global

PrestaShop - Wed, 09/05/2018 - 05:26
We’ve all been there. How many times have you been browsing a site, found the perfect item, and looked down to see it promoted in a currency that’s not yours?
Categories: E-commerce

Liferay IntelliJ Plugin 1.1.1 Released

Liferay - Tue, 09/04/2018 - 22:40


Liferay IntelliJ 1.1.1 plugin has been made available today. Head over to this page to download it.


Release Highlights:


  • Watch task decorator improvements

  • Support for Liferay DXP Wildfly and CE Wildfly 7.0 and 7.1

  • Better integration for Liferay Workspace

  • Improved Editing Support

    • Code completion for resource bundle keys for Liferay Taglib

    • Code completion and syntax highlighting for JavaScript in Liferay Taglibs

    • Better Java editor with OSGi annotations


Using Editors






Special Thanks

Thanks to Dominik Marks for the improvements.

Yanan Yuan 2018-09-05T03:40:00Z
Categories: CMS, ECM

The SnapLogic Patterns Catalog is here!

SnapLogic - Tue, 09/04/2018 - 12:50

Building integration pipelines is quick and easy with SnapLogic’s clicks-not-code approach. Many SnapLogic users build simple and complex pipelines within minutes. Recently made available to all SnapLogic users, the SnapLogic Patterns Catalog is a library of pipeline templates that can be used to shave off even more time by eliminating the need to build pipelines[...] Read the full article here.

The post The SnapLogic Patterns Catalog is here! appeared first on SnapLogic.

Categories: ETL


Drupal - Mon, 09/03/2018 - 11:46
Completed Drupal site or project URL: business registration in North Rhine-Westphalia

Since the 1st of July 2018, the new "Gewerbe-Service-Portal.NRW" has been providing citizen-friendly eGovernment by allowing company founders in the German federal state North Rhine-Westphalia (NRW) to electronically register a business from home. The implementation was carried out by publicplan GmbH on behalf of d-NRW AöR. With the aid of a clearly arranged online form, commercial registration can be transmitted to the responsible public authorities with just a few clicks. Furthermore, an integrated chatbot helps the user with questions.

Service portal

In addition to the business registration, the portal offers information on the topic “foundation of an enterprise”. Furthermore, users have access to all service providers of the "Einheitliche Ansprechpartner NRW" (EA NRW). The online service supports specialised staff in taking up a service occupation or obtaining professional authentication. The search for a competent trading supervision department can also occur via the “Verwaltungssuchmaschine” (VSM) that was developed by d-NRW and publicplan GmbH on behalf of the “Ministerium für Wirtschaft, Innovation, Digitalisierung und Energie NRW” (MWIDE). The VSM is a search engine specialized in information about the public sector.

Business registration together with Chatbot “Guido“

"Guido" is a smart dialogue assistant for questions. He ensures automatic retrievability of information in plain language and is also able to identify each business type by a key. The chatbot determines every suitable business type by approaching the key through request of information. After successful determination, it is automatically transmitted to the form. Therefore, “Guido” saves the complicated search for many similar types of business. The director of publicplan GmbH, Dr. Christian Knebel says: "Thanks to our numerous eGovernment projects, we can draw on a wealth of experience in order to implement such a comprehensive portal. publicplan's integrated chatbot technology is the perfect complement to a contemporary citizen service."

Categories: CMS

Meet The Extenders

Liferay - Mon, 09/03/2018 - 11:40

As I spend more time digging around in the bowels of the Liferay source code, I'm always learning new things.

Recently I was digging in the Liferay extenders and thought I would share some of what I found.

Extender Pattern

So what is this Extender pattern anyway? Maybe you've heard about it related to Liferay's WAB extender or Spring extender or Xyz extender, maybe you have a guess about what they are but as long as they work, maybe that's good enough. If you're like me, though, you'd like to know more, so here goes...

The Extender Pattern is actually an official OSGi pattern. The simplest and most complete definition I found is:

The extender pattern is commonly used to provide additional functionality at run time based on bundle content. For this purpose, an extender bundle scans new bundles at a certain point in their life cycles and decides whether to take additional actions based on the scans. Additional actions might include creating extra resources, instantiating components, publishing services externally, and so forth. The majority of the functionality in the OSGi Service Platform Enterprise Specification is supported through extender bundles, most notably Blueprint, JPA, and WAB support. - Getting Started with the Feature Pack for OSGi Applications and JPA 2.0 by Sadtler, Haischt, Huber and Mahrwald.

But you can get a much larger introduction to the OSGi Extender Model if you want to learn about the pattern in more depth.

Long story short, an extender will inspect files in a bundle that is starting and can automate some functionality. So for DS, for example, it can find classes decorated with the @Component annotation and work with them. The extender takes care of the grunt work that we, as developers, would otherwise have to keep writing to register and start our component instances.
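Stripped of the OSGi machinery (real extenders are typically built on a BundleTracker), the pattern reduces to: watch bundles as they start, look for a marker in their content or headers, and do registration work on their behalf. A toy, framework-free sketch; everything here is illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/**
 * A toy rendition of the extender pattern: the "extender" is told about
 * bundles as they start and, when it recognizes its marker header,
 * performs extra work on the bundle's behalf so the bundle's developer
 * doesn't have to.
 */
public class HeaderExtender {

    private final String markerHeader;
    private final List<String> registrations = new ArrayList<>();

    public HeaderExtender(String markerHeader) {
        this.markerHeader = markerHeader;
    }

    /** Called for every bundle that starts. */
    public void bundleStarted(String bundleName, Map<String, String> headers) {
        String value = headers.get(markerHeader);
        if (value == null) {
            return; // not our concern; the extender ignores this bundle
        }
        // The "additional action": register something derived from the
        // bundle's content (a servlet, a service, a resource, ...).
        registrations.add(bundleName + " -> " + value);
    }

    public List<String> registrations() {
        return registrations;
    }
}
```

Each of Liferay's extenders below follows this same shape; only the marker header and the registration work differ.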

Liferay actually has a number of extenders, let's check them out...


This extender is responsible for wiring up the HttpTunnel servlet for the bundle. If the bundle has a header, Http-Tunnel, it will have a tunnel servlet wired up for it. It will actually create a number of supporting service registrations:

  • AuthVerifierFilter for authentication verification (verification via the AuthVerify pipeline, includes BasicAuth and Liferay session auth).
  • ServletContextHelper for the servlet context for the bundle.
  • Servlet for the tunnel servlet itself. For those that don't know, the tunnel servlet allows for sending commands to a remote Liferay instance via the protected tunnel servlet and is a core part of how remote staging works.

Yep, the theme contributors are implemented using the Extender pattern.

This extender looks for either the Liferay-Theme-Contributor-Type header from the bundle (via your bnd.bnd file) or if there is a package.json file, it looks for the themeContributorType. It also ensures that resources are included in the bundle.

The ThemeContributorExtender will then register two services for the bundle, the first is a ThemeContributorPortalWebResources instance for exposing bundle resources as Portal Web Resources, the other is an instance of BundleWebResources to actually expose and return the bundle resources.


The configurator extender is kind of interesting in that there don't seem to be any actual implementations using it.

Basically, if there is a bundle header, Liferay-Configuration-Path, the configurator extender will use properties files in this path to set configuration in Configuration Admin. I'm guessing this may be useful if you wanted to force a configuration through a bundle deployment, i.e., if you wanted to push a bundle to partially change the ElasticSearch configuration.


The language extender handles the resource bundle handling for the bundle. If the bundle has the liferay.resource.bundle capability, it creates the extension that knows how to parse the capability string (the one that often includes base names, aggregate contexts, etc) and registers a special aggregating Resource Bundle Loader into OSGi.


This is the magical extender that makes ServiceBuilder work in Liferay 7 CE and Liferay 7 DXP... This one is going to take some more space to document.

The extender works for all bundles with a Liferay-Spring-Context header (pulled in from bnd.bnd of course).

First the ModuleApplicationContextExtender creates a ModuleApplicationContextRegistrator instance, which creates a ModuleApplicationContext instance for the bundle that will be the Spring context the module's beans are created in/from. It uses the Liferay-Service and Liferay-Spring-Context bundle headers to identify the Spring context XML files, and the ModuleApplicationContext will be used by Spring to instantiate everything.

It next creates the BeanLocator for the bundle (remember those from the Liferay 6 days? It is how the portal can find services when requested in the templates and scripting control panel).

The ModuleApplicationContextRegistrator then initializes the services and finally registers each of the Spring beans in the bundle as OSGi components (that's why you can @Reference SB services without them actually being declared as @Components in the SB implementation classes).

The ModuleApplicationContextExtender isn't done yet though. The Spring initialization was to prepare a dynamically created component, managed by the OSGi Dependency Manager so that OSGi will take over the management (lifecycle) of the new Spring context.

If the bundle has a Liferay-Require-SchemaVersion header, the ModuleApplicationContextExtender will add the requirement for the listed version as a component dependency. This is how the x.y.z version of the service implementation gets bound specifically to the x.y.z version of the API module. It is also why it is important to make sure that the Liferay-Require-SchemaVersion header is kept in sync with the version you stamp on the API module, and why it is important to actually remember to bump your module version numbers when you change the service.xml file and/or the method signatures on your entity or service classes.

The ModuleApplicationContextExtender has a final responsibility: the initial creation of the SB tables, indexes and sequences. Normally when we are dealing with upgrades, we must manually create our UpgradeStepRegistrator components and have them register upgrade steps to be called when a version changes. But the one upgrade step we never have to write is the ServiceBuilder's 0.0.0 to x.y.z upgrade step. The ModuleApplicationContextExtender automatically registers the upgrade step that applies the scripts to create the tables, indexes and sequences.

So yeah, there's a lot going on here. If you wondered how your service module gets exposed to Liferay, Spring and OSGi, well now you know.


I actually think this extender is poorly named (my own opinion). The name WabFactory implies (to me) that it is only for WABs, but it actually covers generic servlet support as well.

Liferay uses the OSGi HTTP Whiteboard service for exposing all servlets, filters, etc. Your REST service? HTTP Whiteboard. Your JSP portlet? Yep, HTTP Whiteboard. The Theme Contributor? Yep, same. Simple servlets? Yep. Your "legacy" portlet WAR that gets turned into a WAB by Liferay, it too is exposed via the HTTP Whiteboard.

This is a good thing, of course, because with the dynamism of OSGi Declarative Services, your various implementations can be added, removed and restarted without affecting the container.

But, as you may not know, the HTTP Whiteboard is very agnostic towards implementation details. For example, it doesn't talk about JSP handling at all, nor REST handling, MIME types, etc. It really only specifies how services can be registered so that the HTTP Whiteboard can delegate incoming requests to the correct service implementations.

To that end, there's a series of steps to perform for vanilla HTTP Whiteboard registration. First you need the Http service reference so you can register. You then need to create a ServletContextHelper for the bundle (to expose your bundle resources as the servlet context), and finally you use it to register your servlets, filters and listeners, whether they come from the bundle or elsewhere. That's a lot of boilerplate code dealing with the interaction with OSGi; if you're a servlet developer, you shouldn't need to master all of those OSGi aspects.
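To make the boilerplate concrete, here is a minimal sketch of registering a servlet directly with the HTTP Whiteboard via Declarative Services. The class name and URL pattern are made up; the property key comes from the OSGi HTTP Whiteboard specification. It needs the OSGi DS and Servlet APIs on the classpath:

```java
import java.io.IOException;

import javax.servlet.Servlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.osgi.service.component.annotations.Component;

// Hypothetical servlet; the whiteboard property tells the HTTP
// Whiteboard runtime which URL pattern to delegate to this component.
@Component(
	property = "osgi.http.whiteboard.servlet.pattern=/example/hello",
	service = Servlet.class
)
public class HelloServlet extends HttpServlet {

	@Override
	protected void doGet(
			HttpServletRequest request, HttpServletResponse response)
		throws IOException {

		response.getWriter().write("Hello from the whiteboard!");
	}
}
```

Even this "simple" case still leaves the servlet context, JSP support and resource exposure unaddressed, which is exactly the gap the WabFactory fills.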

And then there's still JSP to deal with, REST service, etc. A lot of stuff that you'd need to do.

But we don't, because of the WabFactory.

It is the WabFactory, for example, that processes the Web-ContextPath and Web-ContextName bundle headers so we don't have to decorate every servlet with the HTTP Whiteboard properties. It registers the ServletContextHelper instance for the bundle and binds all of the servlets, filters and listeners to the servlet context. It also registers a JspServlet so the JSP files in the bundle can be compiled and used (you did realize that Liferay, not your application container, is compiling bundle JSPs, right?).
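So instead of whiteboard properties on every servlet, a WAB-style module just declares its context in bnd.bnd (the path shown is illustrative) and the WabFactory wires up the ServletContextHelper from it:

```properties
Web-ContextPath: /my-legacy-portlet
```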

There's actually a lot of other functionality baked into the portal-osgi-web-wab-extender module; I would encourage you to dig through it if you want to understand how all of this magic happens.

This is a new extender introduced in 7.1.

This extender uses two bundle headers, Liferay-JS-Resources-Top-Head and Liferay-JS-Resources-Top-Head-Authenticated, to handle the inclusion of JavaScript resources at the top of the page, possibly serving different resources for guest vs. authenticated sessions.

This new extender basically deprecates the old javascript.barebone.files and javascript.everything.files properties from portal.properties and allows modules to dynamically supply their own scripts to include in the HTML page's <head /> area.

You can actually see this being used in the new frontend-js-web module's bnd.bnd file. You can override it from a custom module by using the Liferay-Top-Head-Weight bundle header to define a higher service ranking.
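Pieced together from the headers named above, a module supplying its own top-of-page scripts would declare something like this in its bnd.bnd (file names and weight are illustrative):

```properties
Liferay-JS-Resources-Top-Head:\
	js/example_everything.js
Liferay-JS-Resources-Top-Head-Authenticated:\
	js/example_authenticated.js
# A higher weight acts like a higher service ranking,
# letting this module's resources win over the defaults.
Liferay-Top-Head-Weight: 100
```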


So we started out with a basic definition of the OSGi Extender pattern. We then made a hard turn towards the concrete implementations currently used in Liferay.

I hope you can see how these extenders are actually an important part of Liferay 7 CE and Liferay 7 DXP, especially from a development perspective.

If the extenders weren't there, as developers we would need to write a lot of boilerplate code. If you tear into the ModuleApplicationContextExtender and everything it is doing for our ServiceBuilder implementations, stop and ask yourself: how productive would your day be if we had to wire all of that up ourselves? How many bugs would we otherwise have had?

Perhaps this review of Liferay Extenders may give you some idea of how you can eliminate boilerplate code in your own projects...

David H Nebinger 2018-09-03T16:40:00Z
Categories: CMS, ECM

BDaaS: Taking the pain out of big data deployment

SnapLogic - Fri, 08/31/2018 - 12:49

There’s Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), even Infrastructure-as-a-Service (IaaS). Now, in the quest to make big data initiatives more accessible to mainstream customers, there’s a new As-A-Service offering that offloads the heavy lifting and capital expenditures associated with Big Data analytics. Organizations from global retail giants to specialty manufacturers are mixing sales data, online data, Internet[...] Read the full article here.

The post BDaaS: Taking the pain out of big data deployment appeared first on SnapLogic.

Categories: ETL

Making the most of marketing automation by installing the MailChimp module on your PrestaShop store.

PrestaShop - Fri, 08/31/2018 - 03:57
In recent years your customers' needs have changed, and, as an e-tailer, one of your main challenges is sending your account holders targeted, high-quality marketing campaigns.
Categories: E-commerce