Assistance with Open Source adoption

Open Source News

Holistic Collaboration

Drupal - Fri, 12/01/2017 - 15:44

The following blog was written by Drupal Association Premium Supporting Partner, Wunder Group.

For the past couple of years I have been talking about holistic development and operations environments at different camps. As this year’s highlight, I gave a session at DrupalCon Vienna on that same topic. The main focus of my talk has been the improvement opportunities offered by a holistic approach, but there is another important aspect I’d like to introduce: the collaboration model.

Every organization has various experts for different subject matters. Together these experts can create great things. As the saying goes, “the whole is greater than the sum of its parts”: more value is generated when those experts work together than what they would produce individually. This, however, is easier said than done.

These experts have usually worked within their own specific domains, and to outsiders their work might seem obscure. It’s easy to think “they’re doing some weird voodoo” without realizing that others might see your work in your own domain the same way. Even worse, we raise our own craft above all others and look down on those who do not excel in our domain.

How IT people see each other:

But each competence is equally important. You can’t ignore one part and focus on another. The whole might be greater than the sum of its parts, but the averages still factor in. Take, for example, a fictional project: the sales people do a great job winning the project and upselling a great design. The design team delivers, and it even somehow gets implemented, but nobody remembered to consult the sysops. Imagine the pinnacle of web design, but launched on this:

(Sorry for the potato quality).

Everything needs to be in balance. The real added value is not in the work everybody does individually, but in what falls in between. We need to fill those gaps with cooperation to get the best value out of the work.

So how do you find the balance?

The key to finding balance and getting the most out of a group of experts is communication and collaboration. There needs to be active involvement from every part of the organization right from the start, to make sure nothing is left unconsidered, and the communication needs to stay active throughout the whole project. It is important to speak the same language. I know it’s easy to slip into domain jargon, and every single discipline has its own. The terms might be clear to you, but remember that the other party might never have heard them. So pay attention to the terms you use.

“Let's set the beresp ttl down to 60s when the request header has the cache-tag set for all the bereq uris matching /api/ endpoint before passing it to FastCGI” - Space Talk

Instead of looking down on each other, we should see others as they see themselves. Respect both their knowledge and the importance of their domain.

How IT people should see each other: 
(Sysadmins: not because we like to flip everybody off, but because of Linus, the guy who can literally change the source code of the real-world Matrix, the Web.)

Everybody needs to acknowledge the goal and work towards it together: not just focus on their own area, but also make sure their work is compatible with that of others. There needs to be a shared communication channel where everyone can reach each other. It should also be possible to talk directly to the people who know best, without going through any hierarchy. A flat organization structure doesn’t only mean you can contact higher-ups directly, but also that you can contact any individual from a different area of expertise directly.

Through collaboration you can also eliminate redundancies. It can happen that different units do overlapping work, each in their own way, or that a team exists only to do the work falling between two other teams. A good example of this is a devops team. Only in a very large organization is there probably enough work to justify a dedicated team for it. As devops people need to know something about both the development and the operations side, they need to be experts in multiple areas. Still, there is project-specific stuff they need to adjust, which means communication between the development and devops teams. Likewise, this same information needs to be passed to the sysops team. The chain could be even longer, but it’s already easy to end up playing Chinese whispers (or telephone) over such a short distance. And there is probably nothing devs and ops couldn’t do together when communicating directly and working together to fill those gaps.

Working together, not only to get things done but also to understand each other, brings a lot of benefits for little work. Building silos and only making improvements inside them will only widen the gap, and all the benefits gained from such improvements will disappear when you need to start filling the gaps between them. Now go and find a way to do something better together!

Written by Janne Koponen.

Categories: CMS

Blueprint and OSGi Fragments for granular module extension

Liferay - Thu, 11/30/2017 - 15:33

OSGi Fragments are a really powerful tool for extending the features of an existing bundle. Since fragments share a classpath with their host bundle, they allow us to implement new functionality while avoiding unnecessary resource duplication. Nevertheless, there are two main limitations that must be taken into account:

  • since fragments never reach the Active status, they can neither register new services nor override existing ones with a higher ranking
  • despite the classpath-sharing feature, fragments cannot override host resources, because the host context is searched before the fragment context. Liferay JSP fragments are explicitly forced to behave differently by the JspFragmentTrackerCustomizer embedded in JspServlet, but this is not the default
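As a minimal illustration of how a fragment declares its host (both bundle symbolic names below are hypothetical placeholders, not taken from this article), the fragment’s manifest carries a Fragment-Host header:

```properties
# Fragment manifest sketch: the framework wires this bundle's classpath
# into the host at resolve time; the fragment itself never activates.
Bundle-SymbolicName: com.example.extension.fragment
Fragment-Host: com.example.host.bundle
```

This wiring is exactly why the limitations above apply: the fragment contributes resources and classes to the host, but has no active lifecycle of its own.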

So, what if we have to implement a new service that needs some existing resources located in a third-party bundle (e.g. a set of JSPs, a Soy template, language properties and so on)? Here comes the power of the Blueprint framework. If we declare our service via Blueprint in our fragment, it will be injected into the host classpath when the fragment is attached and will be activated when the host bundle reaches its Active state.

I will use the simple but practical example below to clarify this concept.

Use case example

A few days ago, a friend of mine asked me how to extend a DDMFormFieldType component in order to make the indexable parameter configurable from the UI. However, this option is usually hidden by the visibilityExpression setting contained in the DefaultDDMFormFieldTypeSettings implementation.

Obviously, to achieve our goal it’s necessary to create a new DDMFormFieldTypeSettings class containing an indexType() method annotated with @DDMFormField(visibilityExpression = "TRUE") and tell our DDMFormFieldType component to refer to these settings. The point here is: how can we best do that, making the most of the modularity of the OSGi paradigm? Let’s examine the possible (and impossible) solutions:

  • Create a new bundle that implements a higher-ranked version of the DDMFormFieldType service. This is the most obvious solution, but with this implementation we are also forced to duplicate the js resources and language properties. It should be our last choice, if we can’t do anything better. But I’m sure we can ^_^
  • We don’t want to replicate resources, so we should try an OSGi Fragment and its wonderful classpath-sharing feature. We could create a fragment and override the settings file, like we did with the Ext Plugin in previous versions of Liferay. This solution DOESN’T WORK, because the host context is searched before the fragment context, so the original resources always take precedence
  • So we are forced to also implement our DDMFormFieldType service and inject the new settings file into it. It’s no big deal after all: we can simply create our component, set a higher service ranking in its properties, and Bob’s your uncle. Again, this solution DOESN’T WORK, because fragments do not reach the Active state, so their services are never started
  • Well, our last solution is the good one, and it’s... a combination of OSGi Fragments and the Blueprint framework. What an astonishing turn of events! With this strategy, we only need 3 files to achieve our goal:
    • Our custom DDMFormFieldTypeSettings class, with the @DDMFormField(visibilityExpression = "TRUE") annotation on the indexType() method
    • Our DDMFormFieldType class, without the component annotation, referring to our settings class
    • An XML file, placed in the src/main/resources/OSGI-INF/blueprint folder, where we register our DDMFormFieldType implementation as a service with a higher ranking than the original. A simple tutorial on how to define services with the Blueprint framework syntax can be found here
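The Blueprint descriptor from the last point could be sketched roughly as follows. This is only an illustration: the bean class name, the ranking value and the ddm.form.field.type.name property are assumptions for the example, not the exact contents of the repo.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- src/main/resources/OSGI-INF/blueprint/context.xml (hypothetical sketch) -->
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <!-- Bean holding our custom field type implementation (hypothetical class) -->
    <bean id="customParagraphFieldType"
          class="com.example.form.field.type.CustomParagraphDDMFormFieldType"/>

    <!-- Register it under the DDMFormFieldType interface with a higher
         ranking than the original, so it wins the service lookup -->
    <service ref="customParagraphFieldType"
             interface="com.liferay.dynamic.data.mapping.form.field.type.DDMFormFieldType"
             ranking="100">
        <service-properties>
            <entry key="ddm.form.field.type.name" value="paragraph"/>
        </service-properties>
    </service>
</blueprint>
```

Because the fragment shares the host classpath, the bean class can freely reference the host’s JSPs, templates and language properties without duplicating them.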

An implementation of this simple example, realised for Liferay 7 CE GA5, can be found in the example repo. In our case, we extended the ParagraphDDMFormFieldType service, but the same can be done for the other field types. Unfortunately, Liferay 7 CE GA5 doesn’t ship with an OOTB implementation of Blueprint, so we first have to install all the required bundles to make it work. This process deserves a dedicated section: the next one.

Blueprint integration

Since Liferay does not have an integrated implementation of the Blueprint framework, we have to install a third-party library. Although Eclipse Gemini Blueprint is the reference implementation of the OSGi Blueprint service, we chose to integrate the Apache Aries Blueprint libraries for a couple of reasons:

  • As reported in LPS-56983, nobody is taking care of the Eclipse Gemini Blueprint project and it’s not evolving at all. In contrast, Apache Aries seems to have a more active community contributing to it
  • The Apache Aries suite also looks interesting for other features, such as the OSGi Subsystems described by David H Nebinger in this blog post

In order to install the Apache Aries Blueprint suite in our Liferay environment, we only need to copy a few bundles into the deploy folder.

In the org-apache-aries-blueprint folder of the example repo you will find the set of bundles we used to enable the Blueprint framework on our Liferay instance. So, in order to run the example, you only have to:

  • Copy bundles from org-apache-aries-blueprint folder to your ${LIFERAY_HOME}/deploy folder
  • Deploy the fragment module on your Liferay instance

If the fragment is correctly resolved, then when you try to add a new paragraph field to a form and click on Show More Options, you should see something like this:


With this article, I’d like to draw your attention to how the combination of the Blueprint framework and OSGi Fragments can further enhance Liferay’s extensibility, giving developers a powerful, versatile and easy-to-use tool for extending the Liferay core that is fully supported by the OSGi standard. Nevertheless, if you find any drawbacks in this approach, or if you have better ideas on how to handle such scenarios, please share them.
Happy Liferay coding!!!


Iacopo Colonnelli 2017-11-30T20:33:58Z
Categories: CMS, ECM

First KNIME Users Meetup in Dresden

Knime - Thu, 11/30/2017 - 08:19
January 9, 2018 — Max-Planck-Institut für Molekulare Zellbiologie und Genetik, Pfotenhauerstraße 108, 01307 Dresden

Dresden is an Eldorado for data mining, and many local analysts use KNIME in their day-to-day work. This is why we have put together the first official KNIME Meetup in Dresden, at the impressive MPI-CBG building, on January 9, 2018.

As a small group of data science enthusiasts and part of the KNIME Community since 2011 — you might have heard of the Selenium and Palladian Nodes — we are very interested in learning about the small and impressive, obscure and obvious discoveries others are making with KNIME or have developed in the context of KNIME. Our focus is both on applications of KNIME and on the development of new nodes and their functionalities. Furthermore, if you have recently discovered KNIME and still feel overwhelmed by its endless possibilities, the meetup is a great place to receive support and advice from experts (or from us).

We are happy to announce that Kilian Thiel from KNIME will talk about recent developments and upcoming features in KNIME. Afterwards, we will give an overview of our development activities. We would also like to hear more about your projects and breakthroughs, so please feel free to contact us in advance to briefly introduce your activities during the meeting. If interest in a specific topic arises during the Meetup, we suggest talking about it afterwards while enjoying some food and drinks.

We are looking forward to seeing you on January 9, 2018!

Register Now

  • 19:00 Welcome
  • 19:10 Presentation by Kilian Thiel from KNIME
  • 19:40 New and Noteworthy from the Dresden Community
  • 20:00 Community Activities — Where to go from here?
  • 20:15 Get Together

Categories: BI

Personalization vs SEO: Common Pitfalls of Creating Personalized Experiences

Liferay - Tue, 11/28/2017 - 13:55

Content and online experience personalization are at the forefront of website innovation, providing users with customer journeys that react to their specific interests and actions. But the drive to create more personalized experiences on public-facing websites can have unintended side effects, such as negatively impacting search engine optimization (SEO) when implemented improperly. Mistakes such as generating multiple URLs for a single page, drastically shifting a page’s content to match individual users, and altering the webpage layout from user to user can all prevent search engine bots from properly crawling your site’s pages and ranking them well in results.

Thankfully, personalized websites can still maintain healthy SEO efforts while providing visitors with dynamic, engaging experiences. The following guidelines will help you better understand the technical approaches to avoiding common mistakes that hurt your search rankings while creating a personalized website.

Clear communication is the key to a successful website and takes two forms: gathering an audience and guiding an audience. SEO helps to gather people looking for something a company offers and bring them to a website, while website personalization strategies help to guide them to what they want while on the site.

The Benefits of Website Personalization

Optimizely defines personalization as “the process of creating customized experiences for visitors to a website … (providing) unique experiences tailored to their needs and desires.” Personalization uses demographics, such as age and gender, to present products and services that could be right for a targeted customer, as well as context, such as location, to match visitors with information related to their region. In doing so, a generic site quickly begins to match the unique interests of each individual audience member. Past behavior on a site can also inform personalization, as seen in the way Amazon presents items on its homepage based on expressed interests.

Through localization, a website’s personalization strategies go beyond translating content for a specific language, culture or region. This includes changing the examples used in content to match the interests and needs of target audiences, such as swapping out case studies to fit the region being targeted. In doing so, a personalized website can most effectively communicate its message to the unique demands of each type of audience.

Gathering Your Audience

Every company with an online presence must face the challenge of getting as much of the correct audience to their website as possible. The following two sets of actions can improve the SEO of a brand’s personalized online experiences.

Ensure Site Indexing
  • Beware of Crawling Errors - Look out for unindexed pages, which prevent search engines from crawling parts of a website and ranking them in search results. Multiple URLs for the same page can draw search engine penalties for what looks like duplicate content, and can split ranking signals by sending users to different URLs for the same page, hurting search results for the correct landing page.
  • Utilize Your Sitemap - Check your sitemap and make sure it is accessible to Google and correctly populated with the right pages, so that crawling is effective and search results are accurate. A well-structured sitemap plays an important role in proper page ranking.
  • Leverage Fetch as Google - Manually submit your pages to Google for indexing, ensuring that the pages you want ranked in search engine results are properly crawled and preventing incorrect, dynamically generated instances from becoming the results seen for your pages.
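For reference, a minimal sitemap entry follows the sitemaps.org schema; the URL below is a placeholder, and the idea is to list one canonical entry per indexable page rather than personalized variants:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- One entry per indexable page; omit dynamically generated variants -->
    <loc>https://www.example.com/products/widget</loc>
    <lastmod>2017-11-01</lastmod>
    <changefreq>weekly</changefreq>
  </url>
</urlset>
```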
Optimize Your Metadata
  • Canonical URLs - Using canonical URLs in your markup will help Google crawl the correct page when multiple URLs are generated for the same page, which can result from dynamic content or custom resources. Doing so will have a positive effect on rankings and direct visitors toward the correct version of your landing page while still providing the needed personalization.
  • Use Localization in Programming - HREFLANG link attributes help enable localization by automatically serving the correct version of a page based on region and language. This can prevent incorrect personalization that leads to visitor frustration and high bounce rates.

All of these efforts are part of effectively gathering a target audience and ensuring that the right eyes land on a website, which is where personalization efforts will help provide successful experiences for each visitor.
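As a brief illustration of the canonical and hreflang tags described above (all URLs are placeholders), a page’s head section would carry something like:

```html
<!-- All dynamically generated variants declare one preferred URL -->
<link rel="canonical" href="https://www.example.com/products/widget" />

<!-- Language/region alternates of the same page, plus a fallback -->
<link rel="alternate" hreflang="en-us" href="https://www.example.com/en-us/products/widget" />
<link rel="alternate" hreflang="de-de" href="https://www.example.com/de-de/products/widget" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/products/widget" />
```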
Guiding Your Visitors

Once visitors have arrived on your website, it is important to leverage on-page personalization in a way that provides targeted experiences without negatively impacting your SEO efforts. Content targeting helps guide and direct users through a site by matching the content on a page to user interest; however, companies should beware of certain pitfalls to better guide website visitors.

  • Avoid Overtargeting - Websites should switch out content based on users, but be careful not to change the content too much, or Google will not be able to tell what the page is trying to communicate. If Google crawls the page and is served content personalized for one specific user, the content generated when targeting a different user persona will never be indexed.
  • Define Functional Page Areas - Break down web pages into sections that cover the concept of the page, why the topic matters to the reader and where the service can be found. While this can somewhat change from page to page, keeping the design and structure fairly consistent can help make personalization easy but still fall in line with other pages on the site. In doing so, visitors will be less likely to be confused when navigating from page to page, hopefully decreasing your website’s bounce rate and having a positive impact on your site’s search engine rankings.
  • Plan Out Dynamic Content - Determine which aspects of your page will be made dynamic in order to suit the reader and what will determine changes in content. For example, one section of a page can be based on localization to correctly match language and references to the region of the reader while another can be based on user profile and page history to match up products, calls to action or links to related content with their unique interests.

With an effective search engine optimization strategy that works in tandem with personalized website experiences, a company’s online presence can both find and retain its target audience members.

Meet Your Audiences With a Modern Website

Modern websites that not only create personalized experiences, but also reach their target audiences, can positively affect companies year after year. Learn more about creating great experiences today.

Read “Three Key Strategies for Consistent Customer Experiences.”

Ryan Schuhler 2017-11-28T18:55:12Z
Categories: CMS, ECM

Four Questions to Assess the Health of Your Intranet

Liferay - Tue, 11/28/2017 - 11:55

When was the last time you checked the value your employees place on your company’s intranet? Often, after adopting intranet software, companies never invest in this aspect of the organization again, doing so only when, for example, the vendor forces them to upgrade to the next version. However, to take advantage of all the features and benefits of an intranet, you need to periodically assess the quality of the user experience these tools provide and, whenever possible, ask employees for feedback on how to improve it.

Teams responsible for intranets usually track metrics such as the number of users who log in daily or how frequently new content is published. These figures can be quite useful, but looking exclusively at this kind of number is not enough to draw conclusions about how engaged users are with the organization’s intranet.

If your company has an intranet and you are thinking about updating it, the following four questions will help you discover whether the system you have today is truly useful and relevant, or whether it might be time to renew it and improve your employees’ experience.

But before you start…

It is important to do a preliminary exercise and write down the answers you expect for these questions. When analyzing an intranet project, keep in mind that intranets should be built around the needs and experience of their end users. However, the reality today is that many of these older projects were built on platforms that offer little flexibility in their functionality and were designed solely around the companies’ specific goals. Modernizing your intranet means aligning it with the needs of your users. If you work through the four questions and the answers are very different from what you expected, then your first task will be to sit down with your employees and understand exactly what their real needs are.

Four quick questions to evaluate your intranet
  1. How often do your users visit the intranet? Whether it’s to catch up on company news or to participate in a collaborative project, as a reference you can consider one visit per day to the portal a basic level of engagement. Check whether the percentage of daily visits you expected resembles the real number. This figure will help you know whether you are aligned with your users.
  2. Do users post, comment or otherwise interact with the platform on a regular basis? The sweet spot for interaction on an intranet is when content is generated by its users. The best intranets succeed without someone from the management team (someone from Human Resources or the intranet owner) having to step in directly. For this second question, you shouldn’t expect users to publish content every day; the goal here is quality, not quantity. If you see spikes in content generation around certain company events, followed by periods in which nothing is published, you should ask yourself: why aren’t they publishing more regularly?
  3. Does the intranet attract people from every department? If you segment the portal’s active users by department, you should find a balanced distribution. In the past, intranets were separated by team, but nowadays they are accessible to the whole organization. Making sure every department uses the portal helps information reach the entire company and prevents communication silos. It also makes it easy for one area to contact another, even if the employees don’t know each other personally.
  4. How many company processes are completed with external tools? Whether it’s collaborating, sending documents or some other process, your team probably uses external tools for certain tasks. This doesn’t have to be a problem, unless you already have corporate tools for those same tasks. If your employees prefer the external tool to the internal one, something is failing in the user experience, so this is a good place to start making changes.


Progress happens when you identify your starting point

Every intranet has room for improvement, especially when users’ needs and expectations change. However, when starting the measurement and analysis process, it is essential to have a clear view of where your intranet stands with respect to your goals. That way you will have a clear picture of your situation, be able to align your plan and strategy with those goals, and avoid wasting resources on features and tools that will have no impact on the user experience.

If you want to develop a plan to improve your portal but find it hard to convince the management team, benchmarking your intranet against other options or against competitors’ intranets can be very useful, and there are many guides available online to help you do it. Another option is to hire a firm to carry out market research on intranet solution vendors. Finally, you should analyze the information you gather internally at your company, since the focus of this whole project must be on constantly improving your employees’ experience; that is, keeping their satisfaction at the center of your strategy.

What makes an intranet successful?

Social collaboration, ease of use and enterprise integration are three key elements of a successful intranet.

Discover how Liferay intranets can help you

Rebeca Pimentel 2017-11-28T16:55:27Z
Categories: CMS, ECM

Meet Mohamed & Mohamed, Ambassadors of the month | November 2017

PrestaShop - Tue, 11/28/2017 - 09:33
Mohamed Marrouchi and Mohamed Melki form a team of Ambassadors in Tunis and regularly organize meetups together. They’re always keen on helping the local PrestaShop community!
Categories: E-commerce

8 Best Practices for Your Omnichannel Strategy

Liferay - Tue, 11/28/2017 - 04:07

Successful omnichannel strategies depend on consistent experiences across every interaction of the customer journey. When achieved, this delivers great benefits both inside and outside your organization, from the back-end processes that shape daily operations to the different ways your target audience can interact with your company. With a well-defined and effective strategy, a brand’s omnichannel goals can be reached and implemented more effectively.

Whether you have already developed an omnichannel strategy or are in the middle of implementing one and looking for improvements, the following eight practices will help you avoid unnecessary complications and align the strategy with your goals:

1. Emphasize fluidity and consistency

Harvard Business Review found that 73% of the participants in its study used multiple channels over the course of the survey. When designing and updating your mobile presence, making sure users can easily switch devices without losing important information will prevent blocking points along the customer journey.

Ensuring data continuity on the back end will make these transitions possible. Nobody wants to restart their purchase process when switching devices, or to be unable to find what they need because of drastic layout changes across different screens. Consistency counters the frustration of dealing with such experiences. Many traditional companies use a single piece of software that stores customer data organized to be accessible by department. However, this can prevent other areas of the company from using that data to provide useful information and build a consistent user profile. By breaking down silos between departments, data can be exploited and used for and by different parts of a company.

2. Provide real-time updates

To deliver truly omnichannel experiences, everything from order statuses to search history has to be updated in real time, so that what is done in one place is reflected everywhere else. According to an Accenture study, 71% of customers expect to be able to view a store’s inventory online, and 50% expect to be able to order online and pick up in the physical store. Improving data continuity and consistency by integrating online information with in-store inventory (thereby integrating the physical and virtual worlds) will prevent the frustration customers feel when they discover, mid-purchase, that the services they want are not available.

3. Enable IoT connectivity

The role and impact of the Internet of Things (IoT) on both users’ daily lives and business keeps growing day by day, and studies predict that this kind of connectivity will only keep expanding in the coming years, reaching 20.4 billion connected IoT devices by 2020 according to Gartner’s forecasts. Enabling your software, devices and applications to communicate and connect with IoT devices will not only give today’s customers a unique omnichannel experience, but will also allow future IoT integrations to be implemented smoothly as their weight in users’ everyday lives continues to grow.

4. Consider how your channels are used

While every channel, internal and external, deserves due attention, each can be used in different ways. Analyze user behavior to learn how your audience uses the different channels: mobile, apps, desktop sessions, social networks and so on. More detailed analytics, such as time and frequency of use, user patterns, bounce rate, referral sources and user profiles, will reveal the different types of audiences that habitually browse your online presence, including the decisive points along the user journey that determine whether a user continues the purchase process or abandons it to finish with a competitor. Also consider how your company communicates internally and which processes can be refined to avoid redundancies and complications.

5. Digital experiences complement in-person ones

Online experiences should not replace in-person experiences; the two should coexist to create new and unique experiences. For example, augmented reality lets customers have a new kind of virtual experience, showing aspects of a store they would otherwise see in person. In doing so, a brand can cover every aspect of an omnichannel experience rather than giving up in-person interactions entirely in favor of an online-only presence. Pottery Barn recently introduced an augmented reality app that lets customers overlay furniture on their real space through their smartphone screen to see how the brand can meet their actual needs. Combining these kinds of experiences will help further blur the lines between channels for a more consistent experience.

6. Map the customer journey

You need to know the path customers take from their first interaction with your brand to the close of the sale. That way you can make sure users can move from one point to another without friction. Keep in mind that many different behaviors are possible, so you will need to account for many different entry points and routes. Today's customer journeys include mobile searches, desktop purchases, in-person customer service, app downloads and more, all coming together to help customers find what they want. A successful journey map reflects many different entry points and interests.

7. Leverage messaging services to meet your audience's needs

More people than ever use instant messaging services, such as Facebook Messenger, to communicate with companies. Instead of calling the customer service department or getting their questions answered in person, audiences are using messaging apps to talk directly with company representatives to resolve their doubts and even order products or services. Different messaging channels can be used, including the growing use of chatbots. Incorporating messaging into a broader omnichannel strategy provides another communication channel and keeps audiences from feeling that something is missing from their overall experience with a company.

8. Strengthen your in-person presence

Many brands approaching omnichannel focus only on their online experiences, but if you have physical locations, they can play an important role in your omnichannel strategy. Since many sales close in person, employees will need to be equipped with devices that let them see a customer's history, interests and reasons for visiting, so they can meet the customer's needs quickly. Doing so will make in-person interactions a valuable part of your company's sales process, rather than a weak point that ends up costing you.

Implementing your omnichannel strategy successfully

While many specific actions will be needed to meet your own needs and goals, creating a realistic plan, allocating an appropriate budget and defining employee roles to determine how best to apply these eight practices to your brand's needs will help you get started with a consistent strategy. In doing so, your company will be better equipped to improve its omnichannel experience and stay on target with both short- and long-term strategies.

Boost your mobile app

There is always more to learn and improve when it comes to tailoring an omnichannel strategy to your company's goals.

Learn how Liferay's Mobile Platform can help you   Marta Dueñas González 2017-11-28T09:07:33Z
Categories: CMS, ECM

Big Data Course for KNIME Analytics Platform, Berlin - March 2018

Knime - Mon, 11/27/2017 - 10:20

KNIME is hosting two one day Big Data Courses for KNIME Analytics Platform during the KNIME Spring Summit in Berlin, Germany on March 6, 2018.

This one day Big Data Course for KNIME Analytics Platform focuses on processing and mining databases with Apache Hadoop and Apache Spark through KNIME. Learn how to interact with these tools based on an example: eliminating missing values by predicting their values based on other attributes.

Register Now

Course Content Tuesday, March 6, 2018

Course T3: KNIME Big Data Extensions and Data Mining with Apache Spark

  • In-database processing with the KNIME database extension
  • Pre-processing on Hadoop with Hive
  • Data processing and machine learning with Apache Spark
Other Courses

See KNIME Spring Summit event page for details.

The KNIME Team.

Categories: BI

Course for KNIME Analytics Platform - Advanced, Berlin - March 2018

Knime - Mon, 11/27/2017 - 10:20

KNIME is hosting a one day Course for KNIME Analytics Platform - Advanced during the KNIME Spring Summit in Berlin, Germany on March 6, 2018.

Course for KNIME Analytics Platform is an ideal opportunity for beginners, advanced users and KNIME experts to be introduced to KNIME, to learn how to use it more effectively, and how to create clear, comprehensive reports based on KNIME workflows.

Register Now

Course Content Tuesday, March 6, 2018

Course T1: KNIME Analytics Platform - Advanced

  • Flow Variables
  • Time Series Analysis
  • Workflow Control
  • Advanced Data Mining
  • Model Selection
  • Open Session
Other Courses

See KNIME Spring Summit event page for details.

The KNIME Team.

Categories: BI

Text Mining Course for KNIME Analytics Platform, Berlin - March 2018

Knime - Mon, 11/27/2017 - 10:20

KNIME is hosting a one day Text Mining Course for KNIME Analytics Platform during the KNIME Spring Summit in Berlin, Germany on March 6, 2018.

This one day Text Mining Course for KNIME Analytics Platform is an intensive training focused on the processing and mining of textual data with KNIME using the Textprocessing extension. Learn how to read textual data in KNIME, enrich it semantically, preprocess, and transform it into numerical data, and finally cluster it, visualize it, or build predictive models. Text mining experience is not necessarily required for this training.

Register Now

Course Content Tuesday, March 6, 2018

Course T2: Text Mining with KNIME Analytics Platform

  • Introduction to KNIME
  • Reading and Importing Textual Data
  • Semantic Enrichment
  • Preprocessing and Transformation
  • Text Classification
  • Visualization
  • Text Clustering
  • Supplementary Workflows
Other Courses

See KNIME Spring Summit event page for details.

The KNIME Team.

Categories: BI

Advanced Analytics Methods Course with KNIME, Berlin - March 2018

Knime - Mon, 11/27/2017 - 08:20

KNIME is hosting a one day Advanced Analytics Methods with KNIME course during the KNIME Spring Summit in Berlin, Germany on March 5, 2018.

In this course you will get an introduction to the most important methods, algorithms and ideas in the data scientist’s toolbox. You will learn to use advanced methods to uncover underlying patterns or concepts contained in large datasets, automatically group objects into meaningful clusters, classify objects into predefined categories, and refine your features to enrich predictive modeling endeavors. This course covers the main algorithms of supervised and unsupervised learning. The course will cover modern thinking on model evaluation, model selection, and novel ideas of model deployment.

Who should attend: Data Scientists, statisticians, business analysts, market researchers, and information technology professionals who need to get started with Advanced Analytics and want to make better use of their data. Managers and Executives who want to embed analytics into their operating model and maximize data-driven business results at scale.

Register Now

Training Content Monday, March 5, 2018

Course M4: Advanced Analytics Methods with KNIME

  • Key Concepts of Advanced Analytics
  • Advanced Analytics Algorithms with KNIME Analytics Platform
  • Data Preparation & Data Exploration for Advanced Analytics
  • Supervised Learning
  • Unsupervised Learning
  • Model Assessment & Model Comparison
  • Model Deployment
Other Courses

See KNIME Spring Summit event page for details.

Categories: BI

IoT Analytics Course with KNIME, Berlin - March 2018

Knime - Mon, 11/27/2017 - 08:20

KNIME is hosting a one day IoT Analytics Course for KNIME Analytics Platform during the KNIME Spring Summit in Berlin, Germany on March 6, 2018.

Sensors, machine-to-machine, and network data will play a leading role in analytics as the Internet of Things becomes a reality. In this course you are introduced to the most important methodologies, algorithms and ideas in IoT Analytics. You will learn to access sensor-based data sources, understand the fundamentals of digital signal processing, realize the importance of smart feature engineering, and use advanced methodologies to uncover underlying patterns or concepts contained in IoT data streams. All exercises are based on real-world IoT datasets.

Who should attend: Data scientists, signal processing engineers, statisticians, mathematicians, computer scientist and information technology professionals who need to get started with IoT Analytics and want to make better use of their data.

Register Now

Course Content Tuesday, March 6, 2018

Course T4: IoT Analytics with KNIME

  • IoT Fundamentals
  • Sensor Data Acquisition
  • Introduction to Digital Signal Processing
  • Feature Engineering
  • Machine Learning Fundamentals
  • Model Training, Assessment, and Comparison
  • Model Deployment
  • IoT Analytics – Applications and Use Cases
Other Courses

The KNIME Team.

Categories: BI

Big Data Course for KNIME Analytics Platform, Berlin - March 2018

Knime - Mon, 11/27/2017 - 08:20

KNIME is hosting two one day Big Data Courses for KNIME Analytics Platform during the KNIME Spring Summit in Berlin, Germany on March 5, 2018.

This one day Big Data Course for KNIME Analytics Platform focuses on processing and mining databases with Apache Hadoop and Apache Spark through KNIME. Learn how to interact with these tools based on an example: eliminating missing values by predicting their values based on other attributes.

Register Now

Course Content Monday, March 5, 2018

Course M3: KNIME Big Data Extensions and Data Mining with Apache Spark

  • In-database processing with the KNIME database extension
  • Pre-processing on Hadoop with Hive
  • Data processing and machine learning with Apache Spark
Other Courses

See KNIME Spring Summit event page for details.

The KNIME Team.

Categories: BI

Text Mining Course for KNIME Analytics Platform, Berlin - March 2018

Knime - Mon, 11/27/2017 - 08:20

KNIME is hosting a one day Text Mining Course for KNIME Analytics Platform during the KNIME Spring Summit in Berlin, Germany on March 5, 2018.

This one day Text Mining Course for KNIME Analytics Platform is an intensive training focused on the processing and mining of textual data with KNIME using the Textprocessing extension. Learn how to read textual data in KNIME, enrich it semantically, preprocess, and transform it into numerical data, and finally cluster it, visualize it, or build predictive models. Text mining experience is not necessarily required for this training.

Register Now

Course Content Monday, March 5, 2018

Course M2: Text Mining with KNIME Analytics Platform

  • Introduction to KNIME
  • Reading and Importing Textual Data
  • Semantic Enrichment
  • Preprocessing and Transformation
  • Text Classification
  • Visualization
  • Text Clustering
  • Supplementary Workflows
Other Courses

See KNIME Spring Summit event page for details.

The KNIME Team.

Categories: BI

Course for KNIME Analytics Platform - Beginners, Berlin - March 2018

Knime - Mon, 11/27/2017 - 08:20

KNIME is hosting a one day Course for KNIME Analytics Platform - Beginners during the KNIME Spring Summit in Berlin, Germany on March 5, 2018.

Course for KNIME Analytics Platform is an ideal opportunity for beginners, advanced users and KNIME experts to be introduced to KNIME, to learn how to use it more effectively, and how to create clear, comprehensive reports based on KNIME workflows.

Register Now

Course Content Monday, March 5, 2018

Course M1: KNIME Analytics Platform - Beginners

  • Introduction to KNIME
  • Importing Data
  • Data Manipulation & Data Aggregation
  • Data Visualization & Highlighting
  • Data Mining
  • Integrating External Tools
  • Data Export & Reporting
  • Summary & Catch-up
Other Courses

See KNIME Spring Summit event page for details.

The KNIME Team.

Categories: BI

Creating a Google Like Search Part V: Finale

Liferay - Mon, 11/27/2017 - 06:48
Previous parts of this series can be found here (part 1),  here (part 2), here (part 3) and here (part 4).

In the final part of this blog series, a few more interesting features are added to the previously created search portlet: the possibility to use Liferay Audience Targeting to make segmented content more relevant, the possibility to configure sort and facet fields (to any indexed fields), and fully configurable search fields and their boosts. There's also the possibility to make non-Liferay content findable through this search portlet.

There were also quite a few generic improvements I made along the way, so at the end of the day we have a custom Liferay search portlet with the following features:

  • Google-like appearance
  • Completely ajaxed interface (no page transitions)
  • 3 selectable search result layouts (image card layout available for files / images)
  • Sortable search results (not available in default Liferay search)
  • Bookmarkable searches with short URLs which can easily be collected into Google Analytics
  • Autocompletion & query suggestions
  • Automatic alternate search
  • Support for Boolean operators and Lucene syntax
  • Configurable options:
    • Asset types to search for
    • Facets to retrieve
    • Sort fields
    • Fields to search for and their boosting
    • etc.
  • Audience targeting support to boost segment-matching contents
  • Ability to include non-Liferay resources in the search results

I also added a few notes on how to make this work on CE. Depending on interest, that could be on the roadmap anyway. I also split the application into separate modules for a cleaner architecture. So, for example, if you'd just like to use the backend and build your own UI, that's also possible now.

Screenshots

Basic Functionality

Results Image Layout

Non-Liferay Assets in the  Search Results


Customizing the Liferay Elasticsearch Adapter

Why would you want to modify the Liferay search adapter? The search adapter implementation in Liferay is responsible for implementing the portal search API for a specific search engine. It takes care of, for example, the communication link between the portal and the search engine, the implementation of index searchers and writers, and the translation of portal search queries to native, engine-specific queries. The following diagram illustrates the layering roughly:


The following diagram, on the other hand, illustrates the physical placement of search functionality in bundles and modules:


In this project I had to customize the adapter to get full support for the most versatile and powerful single query type, the Elasticsearch QueryStringQuery.

The Liferay search API evolves all the time, but at the moment, support for the QueryStringQuery is sparse. It allows only setting the query string but doesn't let you control any of the other roughly 30 parameters. Using that query type, you thus cannot control, for example, boosting, the fields to target the query to, fuzziness, etc.

So, for this purpose I did two things. First, I created a new search query type, QueryStringQuery, extending Liferay's standard query type StringQuery. This new query type is introduced in the gsearch-query-api package. You can think of it as a Liferay search API extension.

The other thing I did was extend the Elasticsearch adapter. If creating a new query type was super easy, this was not. Sure, you can write your own adapter, but how about just extending the standard one a little? Extending the search adapter is currently not as flexible as it could be. When you take a look at the Elasticsearch adapter source code, especially the ElasticsearchQueryTranslator, you see service references to the individual query translators. The first thought would be to use the extension points: to just create an alternative implementation of the StringQuery translator with a higher service ranking to replace the standard translator. That's the way I would like to do it. That way, we wouldn't have to modify the adapter at all, and we would keep the maintainability of our portlet and portal as it is. That's, however, not possible, at least at the moment, for two reasons. First, the references in the ElasticsearchQueryTranslator by default use STATIC and RELUCTANT injection.

The other problem is that the Elasticsearch adapter module only exposes the subpackage. So, if you need to reference any package inside the adapter from your custom service, you will get an import-package error at deploy time, because those referenced packages are private. David Nebinger wrote an excellent piece on overcoming package access restrictions, but there's actually a third, minor problem: the adapter's dependencies are not all OSGi-compliant but are included in the module. So using, for example, Elasticsearch classes from your custom service, let's say from a fragment module, currently leads to class loading problems. Hopefully these limitations will be addressed in the future, but at the moment, customizing the adapter source code seems unavoidable for extending practically any of the search adapter's functionality.

You can see the details of my adapter customization in GitHub. Basically, I created a custom StringQuery translator implementation which, in the case of our extended StringQuery type "QueryStringQuery", uses a specialized translator, and in the case of the standard StringQuery falls back to the default implementation.
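To make the delegation idea concrete, here is a minimal, self-contained Java sketch. The types below are simplified stand-ins, not the real Liferay or Elasticsearch classes: it only illustrates how a custom translator can route the extended query type to a specialized translation and fall back to the default handling for a plain StringQuery.

```java
// Stand-in for Liferay's StringQuery type (illustrative only).
class StringQuery {
    private final String query;
    StringQuery(String query) { this.query = query; }
    String getQuery() { return query; }
}

// Stand-in for the extended QueryStringQuery type from gsearch-query-api,
// carrying one extra parameter (boost) as an example.
class QueryStringQuery extends StringQuery {
    private final float boost;
    QueryStringQuery(String query, float boost) {
        super(query);
        this.boost = boost;
    }
    float getBoost() { return boost; }
}

// Custom translator: checks the runtime type and delegates accordingly.
class CustomStringQueryTranslator {
    String translate(StringQuery query) {
        if (query instanceof QueryStringQuery) {
            QueryStringQuery qsq = (QueryStringQuery) query;
            // A real implementation would build a native Elasticsearch
            // query here and set boost, target fields, fuzziness, etc.
            return "query_string(" + qsq.getQuery() + ", boost=" + qsq.getBoost() + ")";
        }
        // Fall back to the default StringQuery translation.
        return "string(" + query.getQuery() + ")";
    }
}

public class TranslatorSketch {
    public static void main(String[] args) {
        CustomStringQueryTranslator t = new CustomStringQueryTranslator();
        System.out.println(t.translate(new StringQuery("liferay")));
        System.out.println(t.translate(new QueryStringQuery("liferay", 2.0f)));
    }
}
```

The point of the pattern is that existing callers passing a plain StringQuery are unaffected, while callers that opt into the extended type get the richer translation.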

As there will be official Liferay support for Elasticsearch 6 in the near future, I decided not to upgrade the Elasticsearch adapter for 5.6.

Making Search More Intelligent

I was not completely satisfied with the relevancy of the portal's standard search and wanted to play with the idea of improving hit relevancy by making field options, like boosting, configurable and by implementing some machine learning features. I tried to make the API easily customizable so that you can implement your own features.

So one of the first thoughts was that it would be great to integrate the Audience Targeting feature with search. That way you could boost contents segmented for your user segments and, even better, you would have a dynamic way to control search hit relevancy. As this is a DXP-only feature, you can enable and disable it in the configuration options. By default, it's disabled.

How does it work? If the current user belongs to any user segment, a condition is added to the query, giving the configured boost to any contents matching that segment.
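The effect of that condition can be sketched roughly as follows. This is an illustrative, self-contained Java example, not the actual Audience Targeting or search API; the method name, segment names and boost values are all made up:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class SegmentBoostSketch {

    // Hypothetical helper: if the document's segments overlap with the
    // current user's segments, add the configured boost to its base score.
    static double score(double baseScore, Set<String> userSegments,
                        Set<String> contentSegments, double segmentBoost) {
        for (String segment : contentSegments) {
            if (userSegments.contains(segment)) {
                return baseScore + segmentBoost;
            }
        }
        return baseScore;
    }

    public static void main(String[] args) {
        Set<String> user = new HashSet<>(Arrays.asList("developers"));
        // Content matching the user's segment ranks above an otherwise
        // equally scored document.
        System.out.println(score(1.0, user, Collections.singleton("developers"), 0.5));
        System.out.println(score(1.0, user, Collections.singleton("marketing"), 0.5));
    }
}
```

In the real portlet, this happens inside the search engine as an extra query clause rather than as a post-processing step, but the ranking effect is the same.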

Making Non-Liferay Resources Findable

When starting this project, I planned to do some experiments on getting non-Liferay assets into the search results, in conjunction with search engine federation, but decided to leave federation out of scope, as it's something that should usually be transparent to the client (the portal in this case). I'll just mention here that both Elasticsearch and SOLR have means to make that possible.

How do you make search find things in the index that are not Liferay assets? For example, you might have several integration points in your portal with resources that should be findable through Liferay. Usually the options are: make those resources portal assets so that they can be found by portal search, or create a dedicated, custom search for these external resources. Both of these solutions have lots of challenges or usability issues, so how about getting everything on the same result list?

In this simple, imaginary scenario, external resources are indexed into the portal search index. So basically, what you have to do is make sure these external documents have the fields our custom search needs to find them. You can find an example and more information on the GitHub page.

Just a reminder here that this solution only works with our custom portlet, not with the standard Liferay portlet. Also, in a real-world scenario, indexing documents into the Liferay portal index would not be recommendable by any means. To make our custom portlet find resources in custom indexes, you would just need to customize the Elasticsearch search adapter a little bit further, mainly the ElasticsearchIndexSearcher class, where the indexes to search are determined.

So that’s it for this blog series. For more information and details please see the project Github page.



Petteri Karttunen 2017-11-27T11:48:55Z
Categories: CMS, ECM

PHP 7.1.12 and 7.0.26 Released

CMS Magazine - Sun, 11/26/2017 - 16:07

The PHP development team has announced the immediate availability of PHP 7.1.12 and 7.0.26, two bugfix releases that all respective users are encouraged to upgrade to, including users of the official repository, where all images have several critical vulnerabilities.

The official 7.2 release should be available in the coming days (November 30th), after the latest release candidate, which was announced earlier this month.

More information and download at

Categories: CMS

Custom Fields and Profiles: On-line Training November 29th, 9 am PT/ 10 am MT /12 pm ET

CiviCRM - Sun, 11/26/2017 - 11:25

Learn best practices for creating and adding custom fields to CiviCRM to store information unique to your organization in a systemized and functional way.  We will then cover the steps to gather this information directly from your constituents on-line through your website by adding a custom field to a profile and including the profile in an on-line page.  We'll also explore other uses for profiles in CiviCRM in this 2-hour training session. 

Categories: CRM

Liferay DXP and Analytics

Liferay - Fri, 11/24/2017 - 07:08

Liferay Integration with Analytics

1) With Google Analytics

Set up an account with Google Analytics:

This will create a Tracking Id


Configure the Tracking ID in Liferay under Configuration --> Site Settings --> Analytics

A tracking code will be generated, which can then be pasted into the JavaScript of the Liferay pages that need tracking. JavaScript code can be added under the Configure section of a public or private page.

Once done, different analytics information starts being recorded in Google Analytics.

Page Views Graph


Page View Statistics


2) With Piwik

To install Piwik:

a) First, download XAMPP. This bundle includes Apache, PHP, MySQL, etc. Install XAMPP.

b) Also download Piwik from .

Once the XAMPP setup is complete and the Apache and MySQL services are running, copy the unzipped Piwik folder to htdocs (so eventually it will be \xampp\htdocs\piwik).

You can now browse Piwik at http://ip/piwik and start the installation.


During the process set the piwik admin user and complete the installation.


Piwik will generate a tracking code to be configured in Liferay.


Paste the tracking code under Configuration --> Site Settings --> Analytics --> Piwik


Once done, browse different pages and you will see analytics on the Piwik dashboard.

Dashboard 1


Dashboard 2

Sandeep Sapra 2017-11-24T12:08:25Z
Categories: CMS, ECM

Liferay Mobile – Creating Great Omnichannel Experiences - Part 2: Liferay Push

Liferay - Fri, 11/24/2017 - 05:22
This blog entry is part of a three-part blog series that explores some of the great features provided by Liferay Screens and the Liferay Mobile SDK. In this series, I will cover:
  • Liferay Screens – Use screenlets in your native Android or iOS app to leverage content and services available on your Liferay server
  • Liferay Push Notifications – Push offers and notifications directly to user handsets
  • Liferay Audience Targeting – Deliver targeted content and create personalised experiences within your native Android or iOS app
All examples in this series were developed for Android, using Android Developer Studio 3 and Liferay DXP.

Part 2 – Liferay Push

Liferay Push is a framework for sending push notifications from a Liferay server to Android and iOS mobile apps. It’s another great Liferay feature to add to your omnichannel armoury and delivers many great benefits, including:

  • Proactive engagement with your clients. Engagement through push notifications helps to provide up-to-date, relevant information to clients, create a personalised experience, increase traffic and drive conversions.
  • Real-time communication. While emails might take time to be delivered or get lost in a spam folder, push messages reach clients instantly. Especially important for time-critical campaigns or offers!
  • Direct communication. A push notification is a message that is sent directly to a client’s mobile device. The message will pop up even if the device is locked or if the client is in a different app. 
  • Effective communication. Push notifications are a lot less messy than emails. They are simple, uncomplicated messages that provide only essential information, making them user friendly and effective.

In this article, we will take the Android app we developed in the first article and modify it so that it:

  • Registers with our Liferay server in order to receive notifications
  • Receives and parses incoming notification messages
  • Displays notifications in the Android notification drawer

Before we dive into this example, it’s important to briefly explain some of the fundamental mechanics behind it: 

1. Once a Liferay Push-enabled app has registered with a Liferay server, it can wait for notifications from the server. It can even be notified when it's not running (the system will wake it), so you can avoid polling the server for new messages every few seconds (and draining the device's battery in the process).

2. Messages sent from Liferay to an Android handset are not transmitted directly, rather they go through an intermediary service – the Google Cloud Messaging (GCM) service. GCM is a free service that caters for downstream messages from servers to client apps, and upstream messages from client apps to servers.

3. When a Liferay Push-enabled mobile app registers with a Liferay server, it does so directly.

4. Currently, Liferay Push supports two push notification providers: Google and Apple.

In order to follow this example, you will need a Google account, a Google Firebase project and a Google Cloud Messaging (aka Firebase Cloud Messaging) server key.


Configure the Liferay Push plugin

1.       Install the Liferay Push plugin from the Market place

2.       Obtain a “Server key” from Google Cloud Messaging (Firebase > Project > Settings > Cloud Messaging > Server key).

(Take note also of your Sender ID - you will need this later in your app.)

3.       Configure the Liferay Push plugin. Go to Control Panel > System Settings > Other > Android Push Notifications Sender and input your Server key from step #2 into the API Key field.


Configure the mobile app

1.       Now it’s time to switch to our mobile app. The first step here is to add the Push gradle dependency.

compile ''


2.       Next, add the necessary app permissions to AndroidManifest.xml:

<uses-permission android:name="" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WAKE_LOCK" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />


3.       Create PushReceiver  - a custom notifications receiver for our app.

package;

import;

public class PushReceiver extends PushNotificationsReceiver {

    @Override
    public String getServiceClassName() {
        return PushService.class.getName();
    }
}


4.       Add PushReceiver to AndroidManifest.xml:

<receiver android:name=".push.PushReceiver">
    <intent-filter>
        <action android:name="" />
        <category android:name="" />
    </intent-filter>
</receiver>


5.       Create PushService – this class will parse the incoming message and display a notification (with custom icon) in the notification drawer at the top of the device.

package;

import;
import;
import android.content.Context;
import;
import;
import;
import android.util.Log;
import;
import;
import org.json.JSONException;
import org.json.JSONObject;

public class PushService extends PushNotificationsService {

    @Override
    public void onPushNotification(JSONObject jsonObject) {
        super.onPushNotification(jsonObject);
        try {
            jsonObject = new JSONObject(jsonObject.getString("body"));
            String title = getString(jsonObject, "title");
            String description = getString(jsonObject, "description");
            createGlobalNotification(title, description);
        } catch (JSONException e) {
            Log.e("Error", e.getMessage());
        }
    }

    private void createGlobalNotification(String title, String description) {
        Uri uri = RingtoneManager.getDefaultUri(RingtoneManager.TYPE_NOTIFICATION);
        NotificationCompat.Builder builder = new NotificationCompat.Builder(this)
            .setContentTitle(title)
            .setContentText(description)
            .setAutoCancel(true)
            .setSound(uri)
            .setVibrate(new long[]{2000, 1000, 2000, 1000})
            .setSmallIcon(R.drawable.liferay_glyph);
        Notification notification =;
        NotificationManager notificationManager =
            (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
        notificationManager.notify(1, notification);
    }

    private String getString(final JSONObject json, final String element) throws JSONException {
        return json.has(element) ? json.getString(element) : "";
    }
}


6.       Add the PushService service to AndroidManifest.xml: 

<service android:name=".push.PushService" />


7.       Add our Sender ID (from above) to strings.xml. Replace GCM_SENDER_ID with actual value.

<string name="push_sender_id">[GCM_SENDER_ID]</string>


8.       Change the MainActivity so that it extends PushScreensActivity and implements the necessary abstract methods by adding the following.

package;

import android.os.Bundle;
import;
import;
import;
import;
import org.json.JSONObject;

public class MainActivity extends PushScreensActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }

    @Override
    protected Session getDefaultSession() {
        return SessionContext.createSessionFromCurrentSession();
    }

    @Override
    protected void onPushNotificationReceived(JSONObject jsonObject) {
        LiferayLogger.d("Received! " + jsonObject.toString());
    }

    @Override
    protected void onErrorRegisteringPush(String message, Exception e) {
        LiferayLogger.e("Error registering!", e);
    }

    @Override
    protected String getSenderId() {
        return this.getString(R.string.push_sender_id);
    }
}


Test registration and notifications

1.       OK, now it’s time to test our app and make sure that it registers the device with our Liferay portal upon successful login. Verify that there are currently no registered devices (Control Panel > Configuration > Push Notifications > Devices).

2.       Login to the mobile app. Verify that there is now a registered device with the Push plugin:

3.       Note that, if you delete a registered device on the Liferay server and try to re-register it by performing another app login, registration will not re-occur. This is by design - it is bad practice to perform registration every time you launch an app. If you inspect the code within PushScreensActivity you will see that you can force re-registration by either changing your app version (versionCode in build.gradle) or by running the app in debug mode:

if (!userAlreadyRegistered || appUpdated || BuildConfig.DEBUG) {
    push.onSuccess(this).onFailure(this).register(this, getSenderId());
}
else {
    push.onPushNotification(this);
}


4.       Now that our device is registered with the Liferay portal, it’s time to send a message to it! Go to the Test tab of the Liferay Push plugin and submit the following JSON payload:
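For illustration (the exact fields are up to you, since the app above simply logs whatever JSON it receives), a minimal payload following the body/title/description shape used by the server-side example later in this article might look like:

```json
{
  "body": "{\"title\": \"Hello\", \"description\": \"Hello from Liferay!\"}"
}
```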

5.       Click Send and note that a notification now appears on the device.



Next steps

1.       The notifications in this example don't actually do anything, but in a real-world scenario you may want to include a call-to-action that takes the user somewhere, e.g., to a specific activity within your app.
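As a sketch of that call-to-action idea, the hypothetical helper below extracts a routing key from the push payload; in a real app you would then build an android.content.Intent for the matching activity inside onPushNotificationReceived. The "target" field and the PushRouter name are illustrative assumptions, not part of Liferay's API.

```java
// Hypothetical routing helper: maps a "target" field in the push payload to an
// in-app destination name. A real implementation would parse the payload with
// JSONObject and launch an Intent; this sketch uses plain string handling so it
// has no Android dependencies.
public class PushRouter {

    // Returns the value of the "target" field in a flat JSON object,
    // or "home" if the field is absent.
    static String resolveTarget(String payload) {
        String key = "\"target\":\"";
        int start = payload.indexOf(key);
        if (start < 0) {
            return "home";
        }
        start += key.length();
        int end = payload.indexOf('"', start);
        return payload.substring(start, end);
    }

    public static void main(String[] args) {
        System.out.println(resolveTarget("{\"target\":\"orders\",\"body\":\"New order\"}")); // orders
        System.out.println(resolveTarget("{\"body\":\"Hello\"}")); // home
    }
}
```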


2.       At this point, you might also want to develop a custom portlet to make it easy for someone from the marketing department to create notification messages and to target specific devices as part of a marketing campaign. This is easily achieved through Liferay's Audience Targeting feature and through the Liferay API, but is beyond the scope of this article.

To use the API to create and send the notifications themselves, the following code will get you started:

import com.liferay.portal.kernel.json.JSONFactoryUtil;
import com.liferay.portal.kernel.json.JSONObject;
import com.liferay.push.notifications.model.PushNotificationsDevice;
import com.liferay.push.notifications.service.PushNotificationsDeviceLocalService;
import com.liferay.push.notifications.service.PushNotificationsDeviceLocalServiceUtil;

/* ......... */

PushNotificationsDeviceLocalService pushService =
    PushNotificationsDeviceLocalServiceUtil.getService();

/* ......... */

// Build the payload: the outer "body" field carries the inner JSON
// (title and description) as an escaped string.
StringBuilder payloadString = new StringBuilder();
payloadString.append("{\"body\":\"{\\\"title\\\":\\\"").append(msgTitle);
payloadString.append("\\\", \\\"description\\\":\\\"").append(msgBody);
payloadString.append("\\\"}\"}");

JSONObject payload = JSONFactoryUtil.createJSONObject(payloadString.toString());

pushService.sendPushNotification(deliveryDeviceUserIds, payload);
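The double escaping in that payload is easy to get wrong, so here is a small standalone sketch (plain Java, no Liferay dependencies; the class and method names are illustrative) that builds the same string and shows the result:

```java
public class PayloadDemo {

    // Builds the same doubly-encoded payload as the server-side snippet:
    // the outer object has a "body" field whose value is itself a JSON string,
    // so the inner quotes must be escaped.
    static String buildPayload(String msgTitle, String msgBody) {
        StringBuilder payloadString = new StringBuilder();
        payloadString.append("{\"body\":\"{\\\"title\\\":\\\"").append(msgTitle);
        payloadString.append("\\\", \\\"description\\\":\\\"").append(msgBody);
        payloadString.append("\\\"}\"}");
        return payloadString.toString();
    }

    public static void main(String[] args) {
        // prints {"body":"{\"title\":\"Alert\", \"description\":\"Hi\"}"}
        System.out.println(buildPayload("Alert", "Hi"));
    }
}
```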



Liferay provides the tooling you need to send push notifications from your Liferay server to your users' Android and iOS apps. With a little configuration, mostly on the mobile app side of things, this convenient framework helps you create great, immersive digital experiences.

The source code for the Android app used in this example can be found here:


Feel free to clone or fork the code - and have fun playing with Liferay push notifications!


John Feeney 2017-11-24T10:22:15Z
Categories: CMS, ECM