AdoptOS

Assistance with Open Source adoption

Exiting the JVM ... !@#$%

Liferay - Wed, 12/12/2018 - 14:54

Today I experienced a great victory, but the day sure didn't start that way. In the days leading up to adopting OSGi, I spent a lot of time reading about it, listening to discussions, etc. I wanted to understand what the big deal was. Among the list of benefits, one stuck out most for me.

"OSGI does the dependency management, at runtime, for you. So if you have Module A that is dependent on Module B, and all of the sudden Module B vanishes, OSGI will make sure your environment remains stable by stopping any other modules (ie. Module A) that are dependent. This way, your user is not able to start interacting with A at the risk of 500 server app errors or worse."

WOW. I mean, I don't often experience this problem, but I like the idea of not having to worry about it! So when I started my journey with Liferay 7, I was keen to see this in action. I follow the rules. I'm one of those people. But from the beginning I would hit some odd situations where, if I removed the wrong module, I would get the usual "Import-Package" error messages, but I would also see this:

[fileinstall-/home/aj/projects/ml/liferay/osgi/modules][org_apache_felix_fileinstall:112] Exiting the JVM

I am on Linux. The process was still running, but the portal was not responsive. So the only way to fix this was to:

  1. kill the process
  2. remove the module that caused the error (Module A)
  3. clear the osgi/state folder
  4. start the server
  5. redeploy the modules in the right order

Ugh. What a colossal waste of time. Over the past couple of years I would ask people here and there about this error, maybe vent some frustrations, but everyone said the same thing -- "I've never seen that. I don't know what you are talking about." I figured -- pfft, Mac users :)

But today, when it happened again, I said to myself, there must be a reason -- so I started to search. As part of my search I took to Slack. Piotr Paradiuk (whom I had recently met in Amsterdam at DEVCON) was kind enough to rubber duck with me. If you don't know what rubber ducking is -- https://en.wikipedia.org/wiki/Rubber_duck_debugging

Piotr (and David actually, in another chat medium) both had the same reply: "never seen it before." But over the course of trying this and that, Piotr sent me a link to GitHub (https://github.com/liferay/liferay-portal/blob/65059440dfaf2b8b365a20f99e83e4cdb15478aa/modules/core/portal-equinox-log-bridge/src/main/java/com/liferay/portal/equinox/log/bridge/internal/PortalSynchronousLogListener.java#L106) -- mocking my error message :). But after staring at this file, muttering under my breath, something caught my eye.

if ((throwable != null) && (throwable instanceof BundleException) &&
	(_JENKINS_HOME != null)) {

	String throwableMessage = throwable.getMessage();

	if (throwableMessage.startsWith("Could not resolve module")) {
		log.error(_FORMAT, "Exiting the JVM", context, throwable);

		System.exit(1);
	}
}

More specifically --

private static final String _JENKINS_HOME = System.getenv("JENKINS_HOME");

:| ... I have Jenkins running on my machine. Could that be it? So I stopped my instance and ran:

unset JENKINS_HOME

I restarted my portal and ran through the steps to reproduce the problem. This time I got the Import-Package errors, but the JVM didn't exit.

:|

So, the moral of the story: if you are like me and you want to have Jenkins on your dev machine to play with -- virtualize it or run it in a Docker container. Oh, and add:

unset JENKINS_HOME

... to your LIFERAY_HOME/TOMCAT_HOME/bin/setenv.sh

Andrew Jardine 2018-12-12T19:54:00Z
Categories: CMS, ECM

Liferay Portal CE Clustering Returns

Liferay - Tue, 12/11/2018 - 16:35
Background

Last fall we reintroduced the option to use clustering in Liferay 7.0 by compiling a series of modules manually and including them in your project. We received a lot of feedback that this was a very cumbersome process and didn’t really provide the benefits we intended in bringing back clustering.  

Enable Clustering

Beginning with Liferay Portal 7.1 CE GA3, clustering can be enabled just as it was in Liferay Portal 6.2 CE. The official documentation covers the steps needed to enable clustering in GA3. To enable clustering, set the following property (typically in portal-ext.properties):

cluster.link.enabled=true

Successfully enabling clustering will result in the following message in the logs on startup:

-------------------------------------------------------------------
GMS: address=oz-52865, cluster=liferay-channel-control, physical address=192.168.1.10:50643
-------------------------------------------------------------------

Other Steps We Are Taking for the Community

Restoring clustering is one of many steps we are taking to restore faith between Liferay and our community. Other initiatives to improve the developer experience with Liferay include: 

  • More GA releases of Liferay Portal CE to improve stability and quality
     
  • A single-core model for Liferay Commerce, making open source Liferay Commerce installations compatible with our subscription services
     
  • Better support for headless use of Liferay with front-end frameworks
     
  • Improved experience with documentation, including unifying all available documentation and best practices in a single place
     
  • Additional ways to use portions of Liferay Portal's functionality (such as permissioning or forms) in non-Liferay applications (e.g. those built with Spring Boot)

We’ll be providing more information about each of these initiatives in the weeks to come. 

Conclusion

I want to thank you again for sticking with Liferay all these years as we've sorted out our business model (and sometimes made poor decisions along the way). I'm excited for the chance to invest in and grow our community in the coming months. It is what sets us apart from the competition and is truly one of the most rewarding parts of what we do at Liferay. We hope that fully reintroducing clustering is the first step towards achieving this goal, and as always we would love to hear what you think in the comments below.

Bryan H Cheung 2018-12-11T21:35:00Z
Categories: CMS, ECM

(Another) Liferay DEVCON 2018 Experience

Liferay - Tue, 12/11/2018 - 13:36

What a trip. From the dynamic format of the Unconference, to the Air Guitar competition, the opportunity to speak, and a lens into what's coming in 7.2, it was by far the best event that I have ever been to. I've been to many conferences, but this was my first developer conference. It was so good that I forced myself to find the time to document the experience and share it with all of you.

Day 1 - The Arrival

I arrived the day before to try to cope with the time change. If you arrive early in the morning and are trying to overcome jet lag with activities, don't start with the river cruise. David N. warned me, but I didn't listen. The warm cabin, the gentle rocking of the boat, the quiet -- all that was missing was a lullaby. David was right; I fell asleep for most of that ride.

Day 2 - Unconference

I now understand why the Unconference has limited seating and why it sells out so quickly. At first it feels a little bit like group therapy; everyone sitting in a circle, cues around the room for you to share and not be afraid to share. That's how I felt, at least. But the dynamic agenda was inspiring. The participants decide the day's topics. Anyone, ANYONE, can stand up, go to the mic, and propose a topic. The topics are then sorted and organized until you end up with an agenda. The discussions are open forums. You can share something cool you have done, ask a question, gripe about something that is bugging you, whatever you want. It's the perfect format for developers; after all, we love to argue and we love to boast. Best of all, the leads from the various Liferay teams are present, so it's a great opportunity to have your voice heard. For example, I proposed and led a discussion with the topic "Documentation: What is missing?". Cody Hoag, who is the lead for the docs team, was there and shared with us some of the challenges his team faces. He also took notes of the complaints, suggestions, and ideas the participants had, to digest and share with his team.

For me the best part of the Unconference was the opportunity to connect not just with other community members but also with the Liferay people from various teams. With only a little over 150 people in attendance it was a nice, intimate atmosphere -- perfect for learning and sharing. As if the day and the sessions weren't good enough, the community event that Liferay hosted that evening was a blast. If you have not been to an unconference before, I highly recommend you attend the next one! I certainly plan to.

Day 3 - DEVCON: Day 1

If first impressions really are the most important thing, then the events team from Liferay nailed it. The venue for this year’s conference was a theatre. The main room (the Red Hall) was amazing and it was one of the first times I found myself at a conference sitting in a seat that I was sure didn't double as a torture rack. I know how silly that sounds, but when you spend most of your day sitting and listening to others talk, a comfortable chair is pretty important! 

The conference kicked off and Liferay didn't waste any time talking about 7.1. I could barely hold back a grin as Brian Chan mentioned BBSes and IRC (items I had included in my own talk that was yet to come). The keynotes and the sessions for the day were awesome -- though I made the same mistake I always make of attending a workshop with no laptop. The worst part is the walk of shame 20 minutes in as you try to make a quiet (read: impossible) exit from the front of the room to the back. All the sessions I attended were very well done, and I can't wait for the recordings so that I can watch the sessions I was not able to attend. Sessions aside, there was a very rewarding personal moment for me -- receiving a 2018 Top Contributor award. It wasn't my first award, but who doesn't like to be recognized for the work they do? Actually, the best part of the awards piece was probably the guy to my right who was "picking it up for a friend" ... uh-huh :).

The after party was chock-full of fun, fun that included an Air Guitar competition. What did I learn? Liferay doesn't just hire some exceptional developers; they also have some impressive air guitar skills! The combination of awesome old-school rock and all-in, passionate, head-banging air guitar displays makes me think there are several engineers and community members who might have had aspirations of a different career path in their youth. As if a great party and atmosphere weren't enough, I also managed to meet Brian Chan (Liferay's Chief Technical Architect and Founder) -- the guy that started it all. He actually thanked me for the work I have done with Liferay. That was a little shocking, actually. I give back to Liferay and the community because I feel Liferay and the community have done so much for me! As with the community event, I also met some great people from all over Europe and beyond. I even met a Siberian whose name escapes me at the moment, but the shot of vodka he made me do in order to gain passage through the crowd to go home does not.

Day 4 - DEVCON: Day 2

Day 2 for me was D-Day. My session was right before lunch. Having rehearsed more times than I can remember, most of my nervousness was gone. My topic was Journey of a Liferay Developer - The Search for Answers: a community-focused session where I shared how I got involved with Liferay, the challenges I have faced along the way, and how I overcame those challenges. I was a little uneasy, since I wasn't sure anyone would really care to hear my story, but when it was all over, people clapped, so it couldn't have been THAT bad :). All of this was validation that giving back -- the hard work to prep, the courage to stand up and share -- was worth it. It's something that I would gladly do again and something I would suggest everyone try at least once in their life.

After my session I shook hands with Bryan Cheung (CEO of Liferay) and had a conversation with him for a few minutes around some of the pieces I highlighted in my talk. This is one of the many things I love about Liferay -- the fact that you can connect with people in their organization at all levels. You think Mark Hurd or Larry Ellison would ever take the time to sit down with me? Maybe, but I doubt it. 

Just like Day 1, the sessions were once again amazing. I learned about tools that are coming to improve the developer experience, I learned about integrations between Liferay and other platforms like Hystrix, and I watched some of my friends' presentations discussing their challenges, including how to navigate the rough waters of an upgrade. The day ended with an exciting roadmap session where Jorge highlighted some of the things that are coming in 7.2; for me, it was really exciting to see some of the potential features coming down the line.

Apart from all this, though, I was amazed at how, in just a couple of days, I had covered material that would otherwise likely have taken me weeks or possibly months to learn. With all this information I was anxious to get home, both to improve my skills and to share my experiences with others (developers and clients) so that we can all get the most out of our Liferay investments.

Day 5 - Home Again

On your way out, before you check that bag, make sure you have your headphones -- I didn't. The silver lining, though, was that it gave me time to reflect. Nine hours to sit and reflect, to be more precise (that's A LOT of reflection). I found myself dreaming of the next conference that I might attend and topics I might propose, with the hope that Liferay would once again give me the opportunity to share. My goal now? To catch some of the content I missed during the conference (https://www.liferay.com/web/events-devcon), connect with as many people in the community as I can, and make my way to Boston in 2019.

Hopefully this year goes well enough that I'll find myself back in Europe again next fall -- the question is, where? And will I see you there?

Andrew Jardine 2018-12-11T18:36:00Z
Categories: CMS, ECM

Unconference 2018

Liferay - Wed, 12/05/2018 - 09:50

// The Spanish version of this article can be found here: Unconference 2018.

The Unconference took place on November 6 at Pakhuis de Zwijger, the day before Liferay DevCon in Amsterdam. I had read about these kinds of sessions, but I had never taken part in one. Spoiler: I loved it.

If you have ever taken part in one, you'll know that an unconference agenda doesn't exist before the event starts. If you haven't, I'll describe Liferay Unconference 2018 to explain how it works.

First of all, Olaf talked about the different spaces (talk zones, lunch zone, toilets, etc.), lunchtime, organization and, very importantly, about 4 principles and 1 law.

4 principles:

  • Whoever comes is the right people (every participant wants to be there)
  • Whatever happens is the only thing that could have happened (what matters is what is happening in that place and at that moment)
  • Whenever it starts is the right time (you don't have to wait for anything to start the session)
  • When it's over, it's over -- and until it's over, it's not over (make the most of the session)

1 law:

  • Law of Two Feet: If at any time you are in a situation where you are neither learning nor contributing, you can use your two feet to go to a more productive place. This law is important for understanding that every participant is present voluntarily.

I would like to recall a sentence that is important for understanding the unconference mindset: "If nobody attends your session, you have two options. On the one hand, you can look for another interesting session; on the other hand, you can work on your topic, because when you propose a session, you are booking one hour to work on that topic."

At that moment we started to brainstorm and build out the day's agenda. We wrote down topics on cards and chose the tracks.

Finally, the session topics were coordinated into the available time slots and organized to avoid topic repetition and to allow attendees to join the sessions they believe are right for them. It's important to know that only the person who proposed a session can move its card to another slot, so if you want to change one (because there is another session at the same time), you must talk to him or her.

 

[Agenda grid: tracks A-H across hourly slots from 10:00 to 16:00. Proposed sessions included: Staging 2.0 - change list; Liferay in action at the university; REST/HATEOAS vs. GraphQL; SennaJS/module loader with other JS libs; Liferay Analytics; What extension point are you missing?; APIO?; Liferay performance with more than 1 million users; Personalization - use cases & more; DDM, Forms & Workflow; Search optimisation: boosting & filtering; DXP Cloud; How do you monitor your Liferay application & plugins; Using multitenancy (more instances in one installation) - a good idea?; mongoDB & Liferay; Audience Targeting to guest user: how to do it?; "Internal classes" - why are some classes in the "internal" package not exported, and how to customize?; Container + Liferay: how to deploy or upgrade?; Liferay IntelliJ Plugin - Language Server Protocol; Administration experiences; GDPR; Migration experiences: LR 6.2 --> 7.0; Liferay as a headless CMS - best practices; Liferay + TensorFlow; Data integration, ETL/ESB - experiences & methods integrating external systems; Mobile with Liferay; Workspace: tips & tricks - how you use/extend workspace, use plugins?; Liferay Commerce; Config & content orchestration - site initializers; Your experience with Liferay + SSO; DXP vs. CE; React?; Media/video - server user upload/integration; Documentation: what is missing?; Making it easier for business users to build sites (modern site building); Liferay for frontend developers.]

Here are some notes from the sessions I attended:

  • Migration experiences: LR 6.2 --> 7.0
    • If you want to upgrade your Liferay version and also migrate to an open source database, it's better to migrate the database first and then upgrade Liferay.
  • Config & content orchestration - site initializers
    • There are several implementations; one was created by the Commerce team.
    • The Liferay team is considering including it in future versions.
  • Liferay + TensorFlow
    • This is new functionality coming in 7.2; it's already available on GitHub.
    • It works at the asset level and is currently implemented for images.
  • How do you monitor your Liferay application & plugins
    • Inspect the threads in the JVM
    • Java thread dump analyzer -> http://fastthread.io
  • Documentation: What is missing?
    • Slack vs. forum -- how to avoid losing information
  • DXP vs. CE
    • We talked about the two versions (CE and DXP) and the possibility of keeping the code of both versions the same, with licenses paid for support.
  • Workspace: tips & tricks | How you use/extend workspace? Use plugins?
    • When upgrading to version 7, if you can't migrate all your modules, the recommendation is to at least migrate the services.
    • compile only - instruction call to proof

I think the Unconference is very useful for everyone. On the one hand, Liferay staff can validate their functionality and get feedback; on the other hand, we can talk about the topics that matter to us with experts.

To finish Unconference, we met in the plenary room and summarized the event.

Some of the most interesting points of the Unconference are:

  • Philosophy/mentality
  • Attitude (everyone wants to share, teach and learn)
  • You can propose topics easily
  • You have a lot of opportunities to learn (there are plenty of experts).

As I said at the beginning, the Unconference was a wonderful experience, and I would love to go back again.

Álvaro Saugar 2018-12-05T14:50:00Z
Categories: CMS, ECM

Unconference 2018

Liferay - Wed, 12/05/2018 - 05:16

// The English version of this article can be found here: Unconference 2018.

On November 6, the day before DevCon 2018 began, the Liferay Unconference took place at Pakhuis de Zwijger. I had read about this type of session but had never taken part in one. Spoiler: I loved it.

If you have already taken part in one, you'll know that these sessions don't have a previously defined agenda; instead, it is defined at the session itself, by the attendees. If you haven't, I'll explain a bit about how it went so you can see how it works.

First, Olaf gave us an introduction in which he talked about the layout of the zones, food, organization and, of course, the 4 principles and the law.

The 4 principles:

  • Whoever is at the meeting is the right person (since they are showing interest by attending).
  • Whatever is happening is the only thing that could have happened (you have to focus on what is occurring at that moment and in that place).
  • Whenever it starts is the right time (this refers to the start of the sessions: they can begin without waiting for anything in particular, such as everyone being present).
  • When it's over, it's over (this is also often stated in the negative form, "until it's over, it's not over"; the point is to make the most of the time).

The law:

  • Law of Two Feet: if you aren't contributing, or the session isn't giving you anything, you can leave (if at any point you feel you are neither learning nor contributing, use your two feet; that is, you can go to another meeting. The idea is that nobody should be in a meeting they find boring).

One clarification that was made, which struck me as very revealing about the mindset in the room, was: "if you propose a session and nobody comes, you have two options: look for another session that interests you, or use that hour, which has been reserved for the topic, to work on it."

Once all the terms were clear, the agenda was built from the attendees' proposals.

And finally, before starting the sessions, some time was devoted to reorganizing the agenda. An important note: only the person who proposed a session can move it, so if you want to change a session (for example, because it overlaps with another), you have to talk to the people who proposed it and see if they agree.

[Agenda grid: the same schedule of tracks A-H and hourly slots from 10:00 to 16:00 shown in the English version of this article above.]

Among the talks I was able to attend, some of the recommendations that came up were:

  • Migration experiences: LR 6.2 --> 7.0
    • To migrate the database to an open source one, it's better to migrate the database first and the Liferay version afterwards.
  • Config & content orchestration - site initializers
    • There are several proposals; one of them was created by the Commerce team.
    • Including it in the new versions is being analyzed.
  • Liferay + TensorFlow
    • It will be available in version 7.2, although it can already be downloaded from GitHub.
    • It will work at the asset level, and is implemented for images.
  • How to monitor your Liferay applications and plugins
    • Inspect the threads in the JVM
    • Very useful for analyzing thread dumps -> http://fastthread.io
  • Documentation: What is missing?
    • Slack vs. forum -- how to manage questions and avoid losing information.
  • DXP vs. CE
    • Among other topics, we talked about how the two versions are managed and about options such as keeping the code of both versions the same and paying the license for, for example, support.
  • Workspace: tips & tricks | How you use/extend workspace? Use plugins?
    • When moving to version 7, the recommendation is to at least move the services to modules.
    • compile only - instruction call to proof

These kinds of sessions are interesting both for those of us who want to learn about Liferay and for the Liferay people themselves since, for example, several of the talks were proposed by them to get feedback on functionality they have built or are currently working on.

To wrap up the Unconference, there was a closing session in which everyone could share their experience.

The things I liked most, and that I think have the most value:

  • Philosophy/mentality (the idea this kind of conference is based on)
  • Attitude (everyone comes to contribute, share and learn)
  • You can propose topics that interest you
  • It's easy to share and learn (even with experts in the field).

As I said at the beginning, it was a highly recommendable experience, and one I hope to repeat in the future.

Álvaro Saugar 2018-12-05T10:16:00Z
Categories: CMS, ECM

New learning material on Liferay University

Liferay - Tue, 12/04/2018 - 11:41

Have you checked out Liferay University yet? Or, even better, Liferay Passport -- the all-inclusive version of University? If you have, you might want to come back and check out the new content. If you haven't... why not?

Since my last update, we've added two more free lessons and one full course.

And just as before, there's more to come. I could barely write this announcement because I was busy recording more material to be released soon.

Learn as much as you can

The offer that you can't refuse: Liferay University Passport, our flat rate for each and every course on Liferay University, is available at an introductory price of almost 30% off. It includes personal access to all material on University for a full year -- learn as much as you can. The offer expires at the end of the year, so sign up quickly, or regret it.

Prefer a live trainer in the room?

Of course, if you prefer to have a live trainer in the room, the regular trainings are still available, and they have been updated to contain all of the courses that you find on Liferay University and Passport. This way (with a trainer, on- or offline) you can also book courses for all of the previous versions of Liferay.

And, of course, the fine documentation is still available and has already been updated to contain information about the new version.

(Photo: CC by 2.0 Hamza Butt)

Olaf Kock 2018-12-04T16:41:00Z
Categories: CMS, ECM

Liferay Security Announcement: TLS v1.0

Liferay - Thu, 11/29/2018 - 01:00
Update: This has been moved to January 11, 2019.

Reason for the changes

The vulnerabilities in TLS 1.0 (and the SSL protocols) include POODLE and DROWN. Due to these security risks, Liferay has decided to disable TLS 1.0, as many other companies have done.

Moving to TLS 1.1 and higher will allow users to keep communications between Liferay and Liferay.com secure.

Which TLS versions Liferay systems are going to support

We will support TLS 1.1 and above.

Affected Liferay Services and Websites

Liferay Portal CE and Liferay DXP Functionality

  • Marketplace

Liferay DXP Functionality

  • Licensing (via order id, EE only)

Liferay Websites

  • api.liferay.com

  • cdn.lfrs.sl

  • community.liferay.com

  • customer.liferay.com

  • demo.liferay.com

  • dev.liferay.com

  • downloads.liferay.com

  • forms.liferay.com

  • learn.liferay.com

  • liferay.com

  • liferay.com.br

  • liferay.com.cn

  • liferay.de

  • liferay.es

  • liferay.org

  • marketplace.liferay.com

  • mp.liferay.com

  • origin.lfrs.sl

  • partner.liferay.com

  • services.liferay.com

  • support.liferay.com

  • translate.liferay.com

  • www.liferay.com

  • releases.liferay.com (tentative)

  • repository.liferay.com (tentative)

Deployment Impact

There are Liferay Portal CE/EE and Liferay DXP functionalities and applications that make outbound connections to remote servers (including Liferay services and websites). Server administrators should review their deployment configurations and adjust them (if needed) to enable initiating secure connections using a higher TLS protocol version and to prevent falling back to TLS 1.0.

Mitigation Notes for Deployments

Technical Information
  • On Java 8, the default client-side TLS version is TLS 1.2 (TLS 1.1 is also supported and enabled). Java 8 also introduced a new system property called jdk.tls.client.protocols to configure which protocols are enabled.

  • On Java 7, the default client-side TLS version is TLS 1.0, but TLS 1.1 and 1.2 are also supported, though they must be enabled manually. As of Java 7u111, TLS 1.2 is also enabled by default, though this update is available to Oracle Subscribers only.

    • The system property, jdk.tls.client.protocols, is available as of Java 7u95 (for Oracle Subscribers only).

  • On Java 6, the default and only client-side TLS version is TLS 1.0. As of Java 6u111, TLS 1.1 is also supported, though this update is available for Oracle Subscribers only.

  • There is another Java system property available called https.protocols, which controls the protocol version used by Java clients in certain cases (see details on Oracle's blog: Diagnosing TLS, SSL, and HTTPS).
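
For example, to restrict a JVM's outbound connections to the newer protocol versions, you might add JVM options like the following (a sketch; where exactly to set them depends on your application server's startup scripts):

-Djdk.tls.client.protocols=TLSv1.1,TLSv1.2 -Dhttps.protocols=TLSv1.1,TLSv1.2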

As a result, Liferay Portal CE and DXP deployments are affected differently.

Liferay Portal CE/DXP 7.0 and 7.1

Liferay Portal CE 7.0 and Liferay DXP 7.0 and above require Java 8, so these deployments have TLS 1.2 enabled by default, ensuring that outbound connections can use more secure protocol versions. To improve your server's security, Liferay recommends disabling TLS 1.0 for clients (outbound connections) using the system properties mentioned above.
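
To verify which protocol versions your JVM actually offers for client connections (for example, after setting jdk.tls.client.protocols), a minimal standalone check could look like the sketch below; this is not Liferay-specific code:

import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

public class TLSProtocolCheck {

	public static void main(String[] args) throws Exception {
		SSLContext sslContext = SSLContext.getDefault();

		// The protocol versions the default SSLContext offers for
		// client-side TLS handshakes
		SSLParameters sslParameters = sslContext.getDefaultSSLParameters();

		for (String protocol : sslParameters.getProtocols()) {
			System.out.println(protocol);
		}
	}

}

If TLSv1 still shows up in the output after you've set the property, the setting has not taken effect.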

Liferay Portal CE/EE 6.1 and 6.2

Liferay Portal 6.2 CE/EE and 6.1 EE GA3 support Java 8, which has TLS 1.2 enabled by default. Liferay Portal CE 6.1 does not support Java 8. Liferay recommends disabling TLS 1.0 for clients (outbound connections) using the system properties mentioned above.

Liferay Portal EE 6.1 and Liferay Portal CE/EE 6.2 deployments running on Java 7 should consider moving to Java 8. Liferay Portal 6.1 CE deployments should consider upgrading to a newer version with Java 8 support. There is a known issue that prevents enabling TLS 1.1/1.2 manually using the system properties mentioned earlier.

Note for Deployments - Inbound Traffic

Liferay also recommends that server administrators disable support for TLS 1.0 and enable higher TLS protocols for inbound traffic on all Liferay Portal CE/EE and Liferay DXP deployments. The actual settings to enable and configure TLS can vary on each deployment, so system administrators should consult with their Application Server documentation and apply the necessary changes.

Jamie Sammons 2018-11-29T06:00:00Z
Categories: CMS, ECM

Overriding the Favicon in DXP

Liferay - Mon, 11/26/2018 - 13:33

Perusing the web, you'll notice there are a couple of resources out there explaining how to override the default favicon in Liferay. The standard way, of course, is to add it to the /images folder of your theme and apply the theme to the site. As long as you got the path right and used the default name, "favicon.ico", it should work just fine.

While this satisfies most use cases, what if you have stricter business requirements? For example, "the company logo must be displayed at all times while on the site," which includes while navigating the control panel, which has its own theme. You might then be inclined to look into the Tomcat folder and simply replace the file directly for all the themes. While this would work, it is not very maintainable, since you would need to do it every time you deploy a new bundle (which can be very frequent depending on your build process) and for every instance of Liferay you have running.

With DXP, we've introduced another way to override certain JSPs through dynamic includes. Although not every JSP can be overridden this way, luckily for us, top_head.jsp (where the favicon is set) can be. There are a few things you need to do when creating your dynamic include for this. 

The first thing you're going to want to do is register which extension point you'll be using for the JSP:

/html/common/themes/top_head.jsp#post

You're going to use "post" and not "pre" because the last favicon specified is the one that gets used on the rendered page. In other words, by adding our favicon after the default one, ours takes precedence.

Next, here is the default line that we are replacing:

<link href="<%= themeDisplay.getPathThemeImages() %>/<%= PropsValues.THEME_SHORTCUT_ICON %>" rel="icon" />

For that, we need to specify the image we're going to be using, which can be placed in our module's resource folder: src/main/resources/META-INF/resources/images. 

Next, we need a way of accessing our module's resources, which can be done with its web context path. In your bnd.bnd add the following line with your desired path name. For mine, I'll just use /favicon-override:

Web-ContextPath: /favicon-override

With that in place, we can now build our URL. The finished URL will look something like http://localhost:8080/o/favicon-override/images/favicon.ico. The components are the portal URL, /o (which is used to access an OSGi module's resources), the web context path, and the path to the resource.

Putting it all together, your class should look something like this:

// Registering this class as an OSGi component is what activates the
// dynamic include.
@Component(immediate = true, service = DynamicInclude.class)
public class TopHeadDynamicInclude implements DynamicInclude {

	@Override
	public void include(
			HttpServletRequest request, HttpServletResponse response,
			String key)
		throws IOException {

		PrintWriter printWriter = response.getWriter();

		ThemeDisplay themeDisplay = (ThemeDisplay)request.getAttribute(
			WebKeys.THEME_DISPLAY);

		StringBundler url = new StringBundler(4);

		url.append(themeDisplay.getPortalURL());
		url.append("/o");
		url.append(_WEB_CONTEXT_PATH);
		url.append(_FAVICON_PATH);

		printWriter.println(
			"<link href=\"" + url.toString() + "\" rel=\"icon\" />");
	}

	@Override
	public void register(DynamicIncludeRegistry dynamicIncludeRegistry) {
		dynamicIncludeRegistry.register(
			"/html/common/themes/top_head.jsp#post");
	}

	private static final String _WEB_CONTEXT_PATH = "/favicon-override";

	private static final String _FAVICON_PATH = "/images/favicon.ico";

}

That's all there is to it! You could also add additional println statements to cover cases for mobile devices. The beauty of this solution is that you have a deployable module that handles the configuration, as opposed to doing it manually, which is prone to error. Also, by using dynamic includes, it's more maintainable when you need to upgrade to the next version of Liferay.
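
For instance, a hypothetical extra line inside include() could emit a touch icon for iOS devices; the apple-touch-icon.png name is an assumption, and the file would live in the same images resource folder as the favicon:

// Hypothetical: emit a touch icon alongside the favicon. Assumes an
// apple-touch-icon.png was added to the module's images resource folder.
printWriter.println(
	"<link href=\"" + themeDisplay.getPortalURL() + "/o" +
		_WEB_CONTEXT_PATH + "/images/apple-touch-icon.png\" " +
			"rel=\"apple-touch-icon\" />");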

GitHub Repository: https://github.com/JoshuaStClair/favicon-override

Joshua St. Clair 2018-11-26T18:33:00Z
Categories: CMS, ECM

How about leveraging Liferay Forms by adding your own form field?

Liferay - Mon, 11/26/2018 - 09:22

Well, if you're reading this post, I can say you're interested, and maybe anxious, to find out how to create your own form field and deploy it to Liferay Forms. Am I right? Keep reading and see how easy it is to complete this task.

The first step is to install blade-cli (by the way, what a nice tool! It really boosts your Liferay development speed!). Then just type the following:

blade create -t form-field -v 7.1 [customFormFieldNameInCamelCase]

Nice! But what if I'd like to create a custom form field in Liferay DXP 7.0 -- is that possible?

For sure! Try the command below:

blade create -t form-field -v 7.0 [customFormFieldNameInCamelCase]

Since version 3.3.0.201811151753 of blade-cli, developers can choose to name their form field module using hyphens as word separators, as in custom-form-field, or keep using the camel case format. Just so you know, Liferay's developers usually name their modules using hyphens as word separators. :)

That's all Folks! Have a nice customization experience!

Renato Rêgo 2018-11-26T14:22:00Z
Categories: CMS, ECM

Changing the Behavior of Scheduled Jobs

Liferay - Wed, 11/21/2018 - 22:27

Part of content targeting involves scheduled jobs that periodically sweep through several tables in order to remove older data. From a modeling perspective, this is as if content targeting were to make the assumption that all of those older events have a weight of zero, and therefore it does not need to store them or load them for modeling purposes.

If we wanted to evaluate whether this assumption is valid, we would ask questions like how much accuracy you lose by making that assumption. For example, is it similar to the small amount of accuracy you lose by identifying stop words for your content and removing them from a search index, or is it much more substantial? If you wanted to find out with an experiment, how would you design the A/B test to detect what you anticipate to be a very small effect size?

However, rather than look in detail at the assumption, today we're going to look at some problems with the assumption's implementation as a scheduled job.

Note: If you'd like to take a look at the proof of concept code outlined in this post, you can check it out under example-content-targeting-override in my blog's code samples repository. The proof of concept has the following bundles:

  • com.liferay.portal.component.blacklist.internal.ComponentBlacklistConfiguration.config: a sample component blacklist configuration which disables the existing Liferay scheduled jobs for removing older data
  • override-scheduled-job: provides an interface ScheduledBulkOperation and a base class ScheduledBulkOperationMessageListener
  • override-scheduled-job-listener: a sample which consumes the configurations of the existing scheduled jobs to pass to ScheduledBulkOperationMessageListener
  • override-scheduled-job-dynamic-query: a sample implementation of ScheduledBulkOperation that provides the fix submitted for WCM-1490
  • override-scheduled-job-sql: a sample implementation of ScheduledBulkOperation that uses regular SQL to avoid one at a time deletes (assumes no model listeners exist on the audience targeting models)
  • override-scheduled-job-service-wrapper: a sample which consumes the ScheduledBulkOperation implementations in a service wrapper

Understanding the Problem

We have four OSGi components responsible for content targeting's scheduled jobs to remove older data.

  • com.liferay.content.targeting.analytics.internal.messaging.CheckEventsMessageListener
  • com.liferay.content.targeting.anonymous.users.internal.messaging.CheckAnonymousUsersMessageListener
  • com.liferay.content.targeting.internal.messaging.CheckAnonymousUserUserSegmentsMessageListener
  • com.liferay.content.targeting.rule.visited.internal.messaging.CheckEventsMessageListener

Each of the scheduled jobs makes a service call (which, by default, encapsulates the operation in a single transaction) to a total of five service builder services that perform the work for those scheduled jobs. Each of these service calls is implemented as an ActionableDynamicQuery in order to perform the deletion.

  • com.liferay.content.targeting.analytics.service.AnalyticsEventLocalService
  • com.liferay.content.targeting.anonymous.users.service.AnonymousUserLocalService
  • com.liferay.content.targeting.service.AnonymousUserUserSegmentLocalService
  • com.liferay.content.targeting.rule.visited.service.ContentVisitedLocalService
  • com.liferay.content.targeting.rule.visited.service.PageVisitedLocalService

These service builder services ultimately delete older data from six tables.

  • CT_Analytics_AnalyticsEvent
  • CT_Analytics_AnalyticsReferrer
  • CT_AU_AnonymousUser
  • CT_AnonymousUserUserSegment
  • CT_Visited_ContentVisited
  • CT_Visited_PageVisited

If you have enough older data in any of these tables, the large transaction used for the mass deletion will eventually overwhelm the database transaction log and cause the transaction to be rolled back (in other words, no data will be deleted). Because the rollback occurs due to having too much data, and none of this data was successfully deleted, this rollback will repeat with every execution of the scheduled job, ultimately resulting in a very costly attempt to delete a lot of data, with no data ever being successfully deleted.

(Note: With WCM-1309, content targeting for Liferay 7.1 works around this problem by allowing the check to run more frequently, which theoretically prevents you from getting too much data in these tables, assuming you started using content targeting with Liferay 7.1 rather than in earlier releases.)

Implementing a Solution

When we convert our problem statement into something actionable, we can say that our goal is to update either the OSGi components or the service builder services (or both) so that the scheduled jobs which perform mass deletions do so across multiple smaller transactions. This will allow the transactions to succeed.

Step 0: Installing Dependencies

First, before we can even think about writing an implementation, we need to be able to compile that implementation. To do that, you'll need the API bundles (by convention, Liferay names these as .api bundles) for Audience Targeting.

compileOnly group: "com.liferay.content-targeting", name: "com.liferay.content.targeting.analytics.api"
compileOnly group: "com.liferay.content-targeting", name: "com.liferay.content.targeting.anonymous.users.api"
compileOnly group: "com.liferay.content-targeting", name: "com.liferay.content.targeting.api"
compileOnly group: "com.liferay.content-targeting", name: "com.liferay.content.targeting.rule.visited.api"

With that in mind, our first roadblock becomes apparent when we check repository.liferay.com for our dependencies: one of the API bundles (com.liferay.content.targeting.rule.visited.api) is not available, because it's considered part of the enterprise release rather than the community release. To work around that problem, you will need to install all of the artifacts from the release .lpkg into a Maven repository and use those artifacts in your build scripts.

This isn't fundamentally difficult to do, as one of my previous blog posts on Using Private Module Binaries as Dependencies describes. However, because Liferay Audience Targeting currently lives outside of the main Liferay repository there are two wrinkles: (1) the modules in the Audience Targeting distribution don't provide the same hints in their manifests about whether they are available in a public repository or not, and (2) looking up the version each time is a pain.

To address both of these problems, I've augmented the script to ignore the manifest headers and to generate (and install) a Maven BOM from the .lpkg. You can find that augmented script here: lpkg2bom. After putting it in the same folder as the .lpkg, you run it as follows:

./lpkg2bom com.liferay.content-targeting "Liferay Audience Targeting.lpkg"

Assuming you're using the Target Platform Gradle Plugin, you'd then add this to the dependencies section in your parent build.gradle:

targetPlatformBoms group: "com.liferay.content-targeting", name: "liferay.audience.targeting.lpkg.bom", version: "2.1.2"

If you're using the Spring dependency management plugin, you'd add this to the imports section of the dependencyManagement block in your parent build.gradle.

mavenBom "com.liferay.content-targeting:liferay.audience.targeting.lpkg.bom:2.1.2"

(Note: Rumor has it that we plan to merge Audience Targeting into the main Liferay repository as part of Liferay 7.2, so it's possible that the marketplace compile time dependencies situation isn't going to be applicable to Audience Targeting in the future. It's still up in the air whether it gets merged into the main public repository or the main private repository, so it's also possible that compiling customizations to existing Liferay plugins will continue to be difficult in the future.)

Step 1: Managing Dependency Frameworks

Knowing that we are dealing with service builder services, your initial plan might be to override the specific methods invoked by the scheduled jobs, because traditional Liferay wisdom is that the services are easy to customize in Liferay.

  • com.liferay.content.targeting.analytics.service.AnalyticsEventLocalService
  • com.liferay.content.targeting.anonymous.users.service.AnonymousUserLocalService
  • com.liferay.content.targeting.service.AnonymousUserUserSegmentLocalService
  • com.liferay.content.targeting.rule.visited.service.ContentVisitedLocalService
  • com.liferay.content.targeting.rule.visited.service.PageVisitedLocalService

If you attempt this, you will be blindsided by a really difficult part of the Liferay DXP learning curve: the intermixing of multiple dependency management approaches (Spring, Apache Felix Dependency Manager, Declarative Services, etc.), and how that leads to race conditions when dealing with code that runs at component activation. More succinctly, you will end up needing to find a way to control which happens first: your new override of the service builder service being consumed by the OSGi component firing the scheduled job, or the scheduled job firing for the first time.

Rather than try to solve the problem, you can work around it by disabling the existing scheduled job via a component blacklist (relying on its status as a static bundle, which means it has a lower start level than standard modules), and then start a new scheduled job that consumes your custom implementation.

blacklistComponentNames=["com.liferay.content.targeting.analytics.internal.messaging.CheckEventsMessageListener","com.liferay.content.targeting.anonymous.users.internal.messaging.CheckAnonymousUsersMessageListener","com.liferay.content.targeting.internal.messaging.CheckAnonymousUserUserSegmentsMessageListener","com.liferay.content.targeting.rule.visited.internal.messaging.CheckEventsMessageListener"]

Let's take a moment to reflect on this solution design. Given that overriding the service builder service brings us back into a world where we're dealing with multiple dependency management frameworks, it makes more sense to separate the implementation from service builder entirely. Namely, we want to move from a world that's a mixture of Spring and OSGi into a world that is pure OSGi.

Step 2: Setting up the New Scheduled Jobs

Like all scheduled jobs, each of these scheduled jobs will register itself to the scheduler, asking the scheduler to call it at some frequency.

protected void registerScheduledJob(int interval, TimeUnit timeUnit) {
	SchedulerEntryImpl schedulerEntry = new SchedulerEntryImpl();

	String className = getClassName();

	schedulerEntry.setEventListenerClass(className);

	Trigger trigger = triggerFactory.createTrigger(
		className, className, null, null, interval, timeUnit);

	schedulerEntry.setTrigger(trigger);

	_log.fatal(
		String.format(
			"Registering scheduled job for %s with frequency %d %s",
			className, interval, timeUnit));

	schedulerEngineHelper.register(
		this, schedulerEntry, DestinationNames.SCHEDULER_DISPATCH);
}

If you're familiar only with older versions of Liferay, it's important to note that we don't control the frequency of scheduled jobs via portal properties, but rather with the same steps that are outlined in the tutorial, Making Your Applications Configurable.

In theory, this would make it easy for you to check configuration values; simply get an instance of the Configurable object, and away you go. However, in the case of Audience Targeting, Liferay chose to make the configuration class and the implementation class private to the module. This means that we'll need to parse the configuration directly from the properties rather than be able to use nice configuration objects, and we'll have to manually code-in the default value that's listed in the annotation for the configuration interface class.

protected void registerScheduledJob(
	Map<String, Object> properties, String intervalPropertyName,
	int defaultInterval, String timeUnitPropertyName) {

	int interval = GetterUtil.getInteger(
		properties.get(intervalPropertyName), defaultInterval);

	TimeUnit timeUnit = TimeUnit.DAY;

	String timeUnitString = GetterUtil.getString(
		properties.get(timeUnitPropertyName));

	if (!timeUnitString.isEmpty()) {
		timeUnit = TimeUnit.valueOf(timeUnitString);
	}

	registerScheduledJob(interval, timeUnit);
}
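
To show how this overload might be consumed, here is a sketch of an activate method; the property names are hypothetical placeholders, since the real keys come from the module-private configuration interfaces mentioned above:

@Activate
protected void activate(Map<String, Object> properties) {

	// "checkInterval" and "checkTimeUnit" are placeholder property names;
	// substitute the keys from the scheduled job's actual configuration
	registerScheduledJob(properties, "checkInterval", 1, "checkTimeUnit");
}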

With that boilerplate code out of the way, we assume that our listener will be provided with an implementation of the bulk deletion for our model. For simplicity, we'll call this implementation a ScheduledBulkOperation, which has a method to perform the bulk operation, and a method that tells us how many entries it will attempt to delete at a time.

public interface ScheduledBulkOperation {

	public void execute() throws PortalException, SQLException;

	public int getBatchSize();

}

To differentiate between different model classes, we'll assume that the ScheduledBulkOperation has a property model.class that tells us which model it's intended to bulk delete. Then, each of the scheduled jobs asks for its specific ScheduledBulkOperation by specifying a target attribute on its @Reference annotation.

@Override
@Reference(target = "(model.class=abc.def.XYZ)")
protected void setScheduledBulkOperation(
	ScheduledBulkOperation scheduledBulkOperation) {

	super.setScheduledBulkOperation(scheduledBulkOperation);
}

Step 3: Breaking Up ActionableDynamicQuery

There are a handful of bulk updates in Liferay that don't actually need to be implemented as large transactions, and so as part of LPS-45839, we added an (undocumented) feature that allows you to break a large transaction wrapped inside an ActionableDynamicQuery into multiple smaller transactions.

This was further simplified with the refactoring for LPS-46123, so that you could use a pre-defined constant in DefaultActionableDynamicQuery and one method call to get that behavior:

actionableDynamicQuery.setTransactionConfig(
	DefaultActionableDynamicQuery.REQUIRES_NEW_TRANSACTION_CONFIG);

You can probably guess that as a result of the feature being undocumented, when we implemented the fix for WCM-1388 to use an ActionableDynamicQuery to fix an OutOfMemoryError, we didn't make use of it. So even though we addressed the memory issue, if the transaction was large enough, the transaction was still doomed to be rolled back.

So now we'll look towards our first implementation of a ScheduledBulkOperation: simply taking the existing code that leverages an ActionableDynamicQuery, and making it use a new transaction for each interval of deletions.

For the most part, every implementation of our bulk deletion looks like the following, with a different service being called to get an ActionableDynamicQuery, a different name for the date column, and a different implementation of ActionableDynamicQuery.PerformAction for the individual delete methods.

ActionableDynamicQuery actionableDynamicQuery =
	xyzLocalService.getActionableDynamicQuery();

actionableDynamicQuery.setAddCriteriaMethod(
	(DynamicQuery dynamicQuery) -> {
		Property companyIdProperty = PropertyFactoryUtil.forName(
			"companyId");
		Property createDateProperty = PropertyFactoryUtil.forName(
			dateColumnName);

		dynamicQuery.add(companyIdProperty.eq(companyId));
		dynamicQuery.add(createDateProperty.lt(maxDate));
	});

actionableDynamicQuery.setPerformActionMethod(xyzDeleteMethod);
actionableDynamicQuery.setTransactionConfig(
	DefaultActionableDynamicQuery.REQUIRES_NEW_TRANSACTION_CONFIG);
actionableDynamicQuery.setInterval(batchSize);

actionableDynamicQuery.performActions();

With that base boilerplate code, we can implement several bulk deletions for each model that accounts for each of those differences.

@Component(
	property = "model.class=abc.def.XYZ",
	service = ScheduledBulkOperation.class
)
public class XYZScheduledBulkOperationByActionableDynamicQuery
	extends ScheduledBulkOperationByActionableDynamicQuery<XYZ> {
}

Step 4: Bypassing the Persistence Layer

If you've worked with Liferay service builder, you know that almost all non-upgrade code that lives in Liferay's code base operates on entities one at a time. Naturally, anything implemented with ActionableDynamicQuery has the same limitation.

This happens partly because there are no foreign keys (I don't know why this is the case either), partly because of an old incompatibility between Weblogic and Hibernate 3 (which was later addressed through a combination of LPS-29145 and LPS-41524, though the legacy setting lives on in hibernate.query.factory_class), and partly because we still notify model listeners one at a time.

In theory, you can set the value of the legacy property to org.hibernate.hql.ast.ASTQueryTranslatorFactory to allow for Hibernate queries with bulk updates (among a lot of other nice features that are available in Hibernate by default, but not available in Liferay due to the default value of the portal property), and then use that approach instead of an ActionableDynamicQuery. That's what we're hoping to eventually be able to do with LPS-86407.
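
Concretely, that would mean overriding the legacy default in portal-ext.properties; a sketch, with the property name and value taken from the paragraph above:

hibernate.query.factory_class=org.hibernate.hql.ast.ASTQueryTranslatorFactory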

However, if you know you don't have model listeners on the models you are working with (not always a safe assumption), a new option becomes available. You can choose to write everything with plain SQL outside of the persistence layer and not have to pay the Hibernate ORM cost, because nothing needs to know about the model.

This brings us to our second implementation of a ScheduledBulkOperation: using plain SQL.

With the exception of the deletions for CT_Analytics_AnalyticsReferrer (which is effectively a cascade delete, emulated with code), the mass deletion of each of the other five tables can be thought of as having the following form:

DELETE FROM CT_TableName WHERE companyId = ? AND dateColumnName < ?

Whether you delete in large batches or you delete in small batches, the query is the same. So let's assume that something provides us with a map where the key is a companyId, and the value is a sorted set of the timestamps you will use for the deletions, where the timestamps are pre-divided into the needed batch size.

String deleteSQL = String.format(
	"DELETE FROM %s WHERE companyId = ? AND %s < ?", getTableName(),
	getDateColumnName());

try (Connection connection = dataSource.getConnection();
	PreparedStatement ps = connection.prepareStatement(deleteSQL)) {

	connection.setAutoCommit(true);

	for (Map.Entry<Long, SortedSet<Timestamp>> entry :
			timestampsMap.entrySet()) {

		long companyId = entry.getKey();
		SortedSet<Timestamp> timestamps = entry.getValue();

		ps.setLong(1, companyId);

		for (Timestamp timestamp : timestamps) {
			ps.setTimestamp(2, timestamp);

			ps.executeUpdate();

			clearCache(getTableName());
		}
	}
}

So all that's left is to identify the breakpoints. In order to delete in small batches, choose the number of records that you want to delete in each batch (for example, 10000). Then, assuming you're running on a database other than MySQL 5.x, you can fetch the different breakpoints for those batches, though the modulus function will vary from database to database.

SELECT companyId, dateColumnName
FROM (
	SELECT companyId, dateColumnName,
		ROW_NUMBER() OVER (
			PARTITION BY companyId ORDER BY dateColumnName) AS rowNumber
	FROM CT_TableName
	WHERE dateColumnName < ?
)
WHERE MOD(rowNumber, 10000) = 0
ORDER BY companyId, dateColumnName

If you're running a database like MySQL 5.x, you will need something similar to a stored procedure, or you can pull back all of the companyId and dateColumnName records and discard anything that isn't located at a breakpoint. It's wasteful, but it's not that bad.
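
As a sketch of that client-side fallback (plain JDBC; batchSize, maxTimestamp, dataSource, and the table and column names are assumed inputs), you could number the rows yourself and keep every batchSize-th timestamp per company:

Map<Long, SortedSet<Timestamp>> timestampsMap = new HashMap<>();
Map<Long, Integer> rowNumbers = new HashMap<>();

String selectSQL =
	"SELECT companyId, createDate FROM CT_TableName " +
		"WHERE createDate < ? ORDER BY companyId, createDate";

try (Connection connection = dataSource.getConnection();
	PreparedStatement ps = connection.prepareStatement(selectSQL)) {

	ps.setTimestamp(1, maxTimestamp);

	try (ResultSet rs = ps.executeQuery()) {
		while (rs.next()) {
			long companyId = rs.getLong(1);

			// Client-side equivalent of ROW_NUMBER() OVER (PARTITION BY
			// companyId ORDER BY createDate)
			int rowNumber = rowNumbers.merge(companyId, 1, Integer::sum);

			// Keep every batchSize-th timestamp as a breakpoint
			if ((rowNumber % batchSize) == 0) {
				timestampsMap.computeIfAbsent(
					companyId, key -> new TreeSet<>()
				).add(rs.getTimestamp(2));
			}
		}
	}
}

The result has the same shape as the timestampsMap that the delete loop above iterates over.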

Finally, you sequentially execute the mass delete query for each of the different breakpoint values (and treat the original value as one extra breakpoint) rather than just the final value by itself. With that, you've effectively broken up one transaction into multiple transactions, and it will happen as fast as the database can manage, without having to pay the ORM penalty.

Expanding the Solution

Now suppose you encounter the argument, "What happens if someone manually calls the method outside of the scheduled job in order to clean up the older data?" At that point, overriding the service sounds like a good idea.

Since we already have a ScheduledBulkOperation implementation, and because service wrappers are implemented as OSGi components, the implementation is trivial.

@Component(service = ServiceWrapper.class)
public class CustomXYZEventLocalService extends XYZLocalServiceWrapper {

    public CustomXYZEventLocalService() {
        super(null);
    }

    @Override
    public void checkXYZ() throws PortalException {
        try {
            _scheduledBulkOperation.execute();
        }
        catch (SQLException sqle) {
            throw new PortalException(sqle);
        }
    }

    @Reference(target = "(model.class=abc.def.XYZ)")
    private ScheduledBulkOperation _scheduledBulkOperation;

}

Over-Expanding the Solution

With the code now existing in a service override, we have the following question: should we move the logic to whatever we use to override the service and then have the scheduled job consume the service rather than this extra ScheduledBulkOperation implementation? And if so, should we just leave the original scheduled job enabled?

With the above solution already put together, it's not obvious why you would ask that question. After all, if you have the choice to not mix Spring and OSGi, why are you choosing to mix them together again?

However, if you didn't declare the scheduled bulk update operation as its own component, and you had originally just embedded the logic inside of the listener, this question is perfectly natural to ask when you're refactoring for code reuse. Do you move the code to the service builder override, or do you create something else that both the scheduled job and the service consume? And it's not entirely obvious that you should almost never attempt the former.

Evaluating Service Wrappers

In order to know whether it's possible to consume our new service builder override from a scheduled job, you'll need to know the order of events for how a service wrapper is registered:

  1. OSGi picks up your component, which declares that it provides the ServiceWrapper.class service
  2. The ServiceTracker within ServiceWrapperRegistry is notified that your component exists
  3. The ServiceTracker creates a ServiceBag, passing your service wrapper as an argument
  4. The ServiceBag injects your service wrapper implementation into the Spring proxy object

Notice that when you follow the service wrapper tutorial, your service is not registered to OSGi under the interface it implements, because Liferay relies on the Spring proxy (not the original implementation) being published as an OSGi component. This is deliberate, because Liferay hasn't yet implemented a way to proxy OSGi components (though rumor has it that this is being planned for Liferay 7.2), and without that, you lose all of the benefits of the advices that are attached to services.

However, a side-effect of this is that even though no components are notified of a new implementation of the service, all components are transparently updated to use your new service wrapper once the switch completes. What about your scheduled job? Well, until your service wrapper is injected into the Spring proxy, your scheduled job will still be calling the original service. In other words, we're back to having a race condition between all of the dependency management frameworks.

In order to fight against that race condition, you might consider manually registering the scheduled job after a delay, or duplicating the logic that exists in ServiceWrapperRegistry and ServiceBag and polling the proxy to find out when your service wrapper registered. However, all of that is really just hiding dependencies.

Evaluating Bundle Fragments

If you were still convinced that you could override the service and have your scheduled job invoke the service, you might consider overriding the existing service builder bean using a fragment bundle and ext-spring.xml, as described in a past blog entry by David Nebinger on OSGi Fragment Bundles.

However, there are two key limitations of this approach.

  1. You need a separate bundle fragment for each of the four bundles.
  2. A bundle fragment can't register new OSGi components through Declarative Services.

The second limitation warrants additional discussion, because it's also another part of the OSGi learning curve. Namely, code that would work perfectly in a regular bundle will stop working if you move it to a bundle fragment, because a bundle fragment is never started (it only advances to the RESOLVED state).

Since it's a service builder plugin, one workaround for DXP is to use a Spring bean, where the Spring bean will get registered to OSGi automatically later in the initialization cycle. However, choosing this strategy means you shouldn't add @Component to your scheduled job class (otherwise, it gets instantiated by both Spring and OSGi, and that will get messy), and there are a few things you need to keep in mind as you're trying to manage the fact that you're playing with two dependency management frameworks.

  • In order to get references to other Spring-managed components within the same bundle (for example, the service builder service you overrode), wire them up with ext-spring
  • In order to get references to Liferay's Spring beans, use @BeanReference
  • In order to get references to OSGi components, you need to either (a) use static fields and ServiceProxyFactory, as briefly mentioned in the tutorial on Detecting Unresolved OSGi Components, or (b) use the Liferay registry API exported to the global classloader, as mentioned in the tutorial on Using OSGi Services from EXT Plugins; a sketch of the ServiceProxyFactory and @BeanReference styles follows this list
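To make those wiring styles concrete, here is a minimal, hypothetical sketch (the class name is illustrative; MessageBus and CounterLocalService stand in for whatever OSGi service and Spring bean you actually need):

import com.liferay.counter.kernel.service.CounterLocalService;
import com.liferay.portal.kernel.bean.BeanReference;
import com.liferay.portal.kernel.messaging.MessageBus;
import com.liferay.portal.kernel.util.ServiceProxyFactory;

// Instantiated by Spring via ext-spring.xml; deliberately NOT annotated
// with @Component.
public class CleanupScheduledJob {

    // (a) An OSGi component via a static field and ServiceProxyFactory:
    // the proxy swaps in the real service once it is registered.
    private static volatile MessageBus _messageBus =
        ServiceProxyFactory.newServiceTrackedInstance(
            MessageBus.class, CleanupScheduledJob.class, "_messageBus",
            false);

    // A Spring bean via @BeanReference, resolved by Liferay's bean framework.
    @BeanReference(type = CounterLocalService.class)
    private CounterLocalService _counterLocalService;

}

The false argument tells ServiceProxyFactory not to block callers while the real service is still unregistered; pass true if the job must not run without it.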
Evaluating Marketplace Overrides

Of course, if you were to inject the new service using ext-spring.xml using a marketplace override, as described in a past blog entry by David Nebinger on Extending Liferay OSGi Modules, you're able to register new components just fine.

However, there are still three key limitations of this approach.

  1. You need a separate bundle for each of the four marketplace overrides.
  2. Each code change requires a full server restart and a clean osgi/state folder.
  3. You need to be fully aware that the increased flexibility of a marketplace override is similar to the increased flexibility of an EXT plugin.

The reason the second limitation exists is that marketplace overrides are scanned by the same code that scans .lpkg folders rather than through the regular bundle scanning mechanism, and that scan happens only once, during module framework initialization. In theory, you might be able to work around it by adding the osgi/marketplace/override folder to the module.framework.auto.deploy.dirs portal property. However, I don't know how this actually works in practice, because I've quietly accepted the documentation that says that restarts are necessary.

Reviewing the Solution

To summarize, overriding Liferay scheduled jobs is fairly straightforward once you have all of the dependencies you need, assuming you're willing to accept the following two steps:

  1. Disable the existing scheduled job
  2. Create a new implementation of the work that scheduled job performs

If you reject these steps and try to play at the boundary of where different dependency management frameworks interact, you need to deal with the race conditions and complications that arise from that interaction.

Minhchau Dang 2018-11-22T03:27:00Z
Categories: CMS, ECM

Changing OSGi References

Liferay - Fri, 11/16/2018 - 12:53

So we've all seen those @Reference annotations scattered throughout the Liferay code, and it can almost seem like those references are not changeable.

In fact, this is not really true at all.

The OSGi Configuration Admin service can be used to change the reference binding without touching the code.

Let's take a look at a contrived example from Liferay's com.liferay.blogs.demo.internal.BlogsDemo class.

This class has a number of @Reference injections for different types of demo content generators. One of those is declared as:

@Reference(target = "(source=lorem-ipsum)", unbind = "-")
protected void setLoremIpsumBlogsEntryDemoDataCreator(
    BlogsEntryDemoDataCreator blogsEntryDemoDataCreator) {

    _blogsEntryDemoDataCreators.add(blogsEntryDemoDataCreator);
}

So in this example, the com.liferay.blogs.demo.data.creator.internal.LoremIpsumBlogsEntryDemoDataCreatorImpl is registered with a property, "source=lorem-ipsum", and it can generate content for a blogs entry demo.

Let's say that we have our own demo data creator, com.example.KlingonBlogsEntryDemoDataCreatorImpl that generates blog entries in Klingon (it has "source=klingon" defined for its property), and we want the blogs demo class to not use the lorem-ipsum version, but instead use our klingon variety.

How can we do this?

Well, BlogsDemo is a component, so we could create a copy of it and change the relevant code to @Reference ours, but this seems kind of like overkill.

A much easier way would be to get OSGi to just bind to our instance rather than the original. This is actually quite easy to do.

First we will need to create a configuration admin override file in osgi/config named after the full class name but with a .config extension.

So we need to create an osgi/config/com.liferay.blogs.demo.internal.BlogsDemo.config file. This file will have our override for the reference to bind to, but we need to get some more details for that.

We need to know the name of the reference we're going to be targeting, since that will be part of the configuration change. This name comes from what the @Reference decorates. If @Reference is on a field, the field name is the name you need; if it is on a setter, the name is the setter method name without the leading "set" prefix.

So, from above, since we have setLoremIpsumBlogsEntryDemoDataCreator(), our field name will be "LoremIpsumBlogsEntryDemoDataCreator".

To change the target, we'll need to add a line to our config file with the following:

LoremIpsumBlogsEntryDemoDataCreator.target="(source\=klingon)"

This will effectively change the target string from the old (source=lorem-ipsum) to the new (source=klingon).

So this is how we can change the wiring without really overriding a line of code.

You can even take this further. Even when an @Reference annotation has no target filter, you can add one through configuration to bind a different reference. This can be an alternative to relying on a higher service ranking for binding.

For those cases where a service tracker is being used to track a list of entities, you can use this technique to exclude one or more references that you don't want to have the service tracker capture.

I didn't come up with all of this myself. It's an adaptation of https://dev.liferay.com/develop/tutorials/-/knowledge_base/7-0/overriding-service-references#configure-the-component-to-use-the-custom-service to demonstrate just how that can be used to change the wiring.

David H Nebinger 2018-11-16T17:53:00Z
Categories: CMS, ECM

Accessing Services in JSPs

Liferay - Fri, 11/16/2018 - 10:21
Introduction

When developing JSP-based portlets for OSGi deployment, and even when doing JSP fragment bundle overrides, it is often necessary to get service references in the JSP pages. But OSGi @Reference won't work in the JSP files, so we need ways to expose the services so they can be accessed in the JSPs...

Retrieving Services in the JSP

So we're going to work this a little backwards: first we'll cover how to get the service reference in the JSP itself.

In order to get the references, we're going to use a scriptlet to pull the reference from the request attributes, similar to:

<% TrashHelper trashHelper = (TrashHelper) request.getAttribute(TrashHelper.class.getName()); %>

The idea is that we will be pulling the reference directly out of the request attributes. We need to cast the object coming from the attributes to the right type, and we'll be following the Liferay standard of using the full class name as the attribute key.
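One caveat worth guarding against: if nothing upstream has set the attribute (say, the portlet or filter that injects it isn't deployed), getAttribute() returns null. A defensive variant of the same scriptlet:

<%
TrashHelper trashHelper =
    (TrashHelper)request.getAttribute(TrashHelper.class.getName());

if (trashHelper == null) {
    // Nothing injected the service; skip the feature or render a fallback
    // rather than risking a NullPointerException further down the page.
}
%>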

The challenge is how to set the attribute into the request.

Setting Services in a Portlet You Control

So when you control the portlet code, injecting the service reference is pretty easy.

In your portlet class, you're going to add your @Reference for the service you need to pass. Your portlet class would include something along the lines of:

@Reference(unbind = "-")
protected void setTrashHelper(TrashHelper trashHelper) {
    this._trashHelper = trashHelper;
}

private TrashHelper _trashHelper;

With the reference available, you'll then override the render() method to set the attribute:

@Override
public void render(
        RenderRequest renderRequest, RenderResponse renderResponse)
    throws IOException, PortletException {

    renderRequest.setAttribute(TrashHelper.class.getName(), _trashHelper);

    super.render(renderRequest, renderResponse);
}

So this sets the service as an attribute in the render request. On the JSP side, it would be able to get the service via the code shared above.

Setting Services in a Portlet You Do Not Control

So you may need to build a JSP fragment bundle to override JSP code, and in your override you need to add a service which was not injected by the core portlet.  It would be kind of overkill to override the portlet just to inject missing services.

So how can you inject the services you need? A portlet filter implementation!

Portlet filters are similar to the old servlet filters: they wrap the invocation of an underlying portlet. And, like servlet filters, they can make adjustments to requests/responses on the way into the portlet as well as on the way out.

So we can build a portlet filter component and inject our service reference that way...

@Component(
    immediate = true,
    property = "javax.portlet.name=com_liferay_dictionary_web_portlet_DictionaryPortlet",
    service = PortletFilter.class
)
public class TrashHelperPortletFilter implements RenderFilter {

    @Override
    public void doFilter(
            RenderRequest renderRequest, RenderResponse renderResponse,
            FilterChain filterChain)
        throws IOException, PortletException {

        filterChain.doFilter(renderRequest, renderResponse);

        renderRequest.setAttribute(TrashHelper.class.getName(), _trashHelper);
    }

    @Reference(unbind = "-")
    protected void setTrashHelper(TrashHelper trashHelper) {
        this._trashHelper = trashHelper;
    }

    private TrashHelper _trashHelper;

}

So this portlet filter is configured to bind to the Dictionary portlet. It will be invoked at each portlet render since it implements a RenderFilter. The implementation calls through to the filter chain to invoke the portlet, but on the way out it adds the helper service to the request attributes.

Conclusion

So we've seen how we can use OSGi services in the JSP files indirectly via request attribute injection. In portlets we control, we can inject the service directly. For portlets we do not control, we can use a portlet filter to inject the service too.

David H Nebinger 2018-11-16T15:21:00Z
Categories: CMS, ECM

Pro Liferay Deployment

Liferay - Fri, 11/16/2018 - 00:15
Introduction

The official Liferay deployment docs are available here: https://dev.liferay.com/discover/deployment

They make it easy for folks new to Liferay to get the system up and running and work through all of the necessary configuration.

But it is not the process followed by professionals. I wanted to share the process I use in the hope that it provides an alternative set of instructions you can use to build out your own production deployment process.

The Bundle

Like the Liferay docs suggest, you may want to start from a bundle; always start from the latest bundle you can. It is, for the most part, a working system that may be ready to go. I say "for the most part" because many of the bundles ship with older versions of the application servers. This may or may not be a concern for your organization, so consider whether you need to update the application server.

You'll want to explode the bundle so all of the files are ready to go.

If you are using DXP, you'll want to download and apply the latest fixpack. Doing this before the first start will ensure that you won't need to deal with an upgrade later on.

The Database

You will, of course, need a database for Liferay to connect to and set up. I prefer to create the initial database using database specific tools. One key aspect to keep in mind is that the database must be set up for UTF-8 support as Liferay will be storing UTF-8 content.

Here's an example of what I use for MySQL/MariaDB:

create database lportal character set utf8;
grant all privileges on lportal.* to 'MyUser'@'192.168.1.5' identified by 'myS3cr3tP4sswd';
flush privileges;

Here's the example I use for Postgres:

create role MyUser with login password 'myS3cr3tP4sswd';
alter role MyUser createdb;
alter role MyUser superuser;
create database lportal with owner 'MyUser' encoding 'UTF8' LC_COLLATE='en_US.UTF-8' LC_CTYPE='en_US.UTF-8' template template0;
grant all privileges on database lportal to MyUser;

There are other examples available for other databases, but hopefully you get the gist.

From an enterprise perspective, you'll have things to consider such as a backup strategy, possibly a replication strategy, a cluster strategy, ... These things will obviously depend upon enterprise needs and requirements and are beyond the scope of this blog post.

Along with the database, you'll need to connect the appserver to the database. I always want to go for the JNDI database configuration rather than sticking the values in the portal-ext.properties. The passwords are much more secure in the JNDI database configuration.
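One detail that's easy to miss: portal-ext.properties still needs to point at that JNDI name, typically with jdbc.default.jndi.name=jdbc/LiferayPool, so the portal looks up the container-managed pool instead of building its own connection pool.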

For tomcat, this means going into the conf/Catalina/localhost directory and editing the ROOT.xml file as such:

<Resource name="jdbc/LiferayPool"
    auth="Container"
    type="javax.sql.DataSource"
    factory="com.zaxxer.hikari.HikariJNDIFactory"
    minimumIdle="5"
    maximumPoolSize="10"
    connectionTimeout="300000"
    dataSource.user="MyUser"
    dataSource.password="myS3cr3tP4sswd"
    driverClassName="org.mariadb.jdbc.Driver"
    dataSource.implicitCachingEnabled="true"
    jdbcUrl="jdbc:mariadb://dbserver/lportal?characterEncoding=UTF-8&amp;dontTrackOpenResources=true&amp;holdResultsOpenOverStatementClose=true&amp;useFastDateParsing=false&amp;useUnicode=true" />

Elasticsearch

Elasticsearch is also necessary, so the next step is to stand up your ES solution. Could be one node or a cluster. Get your ES system set up and collect your IP address(es). Verify that firewall rules allow for connectivity from the appserver(s) to the ES node(s).

With the ES servers up, create an ES configuration file, com.liferay.portal.search.elasticsearch.configuration.ElasticsearchConfiguration.config, in the osgi/config directory and set the contents:

operationMode="REMOTE"
clientTransportIgnoreClusterName="false"
indexNamePrefix="liferay-"
httpCORSConfigurations=""
additionalConfigurations=""
httpCORSAllowOrigin="/https?://localhost(:[0-9]+)?/"
networkBindHost=""
transportTcpPort=""
bootstrapMlockAll="false"
networkPublishHost=""
clientTransportSniff="true"
additionalIndexConfigurations=""
retryOnConflict="5"
httpCORSEnabled="true"
clientTransportNodesSamplerInterval="5s"
additionalTypeMappings=""
logExceptionsOnly="true"
httpEnabled="true"
networkHost="[_eth0_,_local_]"
transportAddresses=["lres01:9300","lres02:9300"]
clusterName="liferay"
discoveryZenPingUnicastHostsPort="9300-9400"

Obviously you'll need to edit the contents to use local IP address(es) and/or name(s). This can and should all be set up before the Liferay first start.

Portal-ext.properties

Next is the portal-ext.properties file. Below is the one that I typically start with as it fits most of the use cases for the portal that I've used. All properties are documented here: https://docs.liferay.com/ce/portal/7.0/propertiesdoc/portal.properties.html

company.default.web.id=example.com
company.default.home.url=/web/example
default.logout.page.path=/web/example
default.landing.page.path=/web/example
admin.email.from.name=Example Admin
admin.email.from.address=admin@example.com

users.reminder.queries.enabled=false

session.timeout=5
session.timeout.warning=0
session.timeout.auto.extend=true
session.tracker.memory.enabled=false

permissions.inline.sql.check.enabled=true

layout.user.private.layouts.enabled=false
layout.user.private.layouts.auto.create=false
layout.user.public.layouts.enabled=false
layout.user.public.layouts.auto.create=false
layout.show.portlet.access.denied=false

redirect.url.security.mode=domain

browser.launcher.url=

index.search.limit=2000
index.filter.search.limit=2000
index.on.upgrade=false

setup.wizard.enabled=false
setup.wizard.add.sample.data=off

counter.increment=2000
counter.increment.com.liferay.portal.model.Layout=10

direct.servlet.context.reload=false

search.container.page.delta.values=20,30,50,75,100,200

com.liferay.portal.servlet.filters.gzip.GZipFilter=false
com.liferay.portal.servlet.filters.monitoring.MonitoringFilter=false
com.liferay.portal.servlet.filters.sso.ntlm.NtlmFilter=false
com.liferay.portal.servlet.filters.sso.opensso.OpenSSOFilter=false
com.liferay.portal.sharepoint.SharepointFilter=false
com.liferay.portal.servlet.filters.validhtml.ValidHtmlFilter=false

blogs.pingback.enabled=false
blogs.trackback.enabled=false
blogs.ping.google.enabled=false

dl.file.rank.check.interval=-1
dl.file.rank.enabled=false

message.boards.pingback.enabled=false

company.security.send.password=false
company.security.send.password.reset.link=false
company.security.strangers=false
company.security.strangers.with.mx=false
company.security.strangers.verify=false

#company.security.auth.type=emailAddress
company.security.auth.type=screenName
#company.security.auth.type=userId

field.enable.com.liferay.portal.kernel.model.Contact.male=false
field.enable.com.liferay.portal.kernel.model.Contact.birthday=false

terms.of.use.required=false

# ImageMagick
imagemagick.enabled=false
#imagemagick.global.search.path[apple]=/opt/local/bin:/opt/local/share/ghostscript/fonts:/opt/local/share/fonts/urw-fonts
imagemagick.global.search.path[unix]=/usr/bin:/usr/share/ghostscript/fonts:/usr/share/fonts/urw-fonts
#imagemagick.global.search.path[windows]=C:\\Program Files\\gs\\bin;C:\\Program Files\\ImageMagick

# OpenOffice
# soffice -headless -accept="socket,host=127.0.0.1,port=8100;urp;"
openoffice.server.enabled=true

# xuggler
xuggler.enabled=true

#hibernate.jdbc.batch_size=0
hibernate.jdbc.batch_size=200

cluster.link.enabled=true
ehcache.cluster.link.replication.enabled=true
cluster.link.channel.properties.control=tcpping.xml
cluster.link.channel.properties.transport.0=tcpping.xml
cluster.link.autodetect.address=dbserver

company.security.auth.requires.https=true
main.servlet.https.required=true
atom.servlet.https.required=true
axis.servlet.https.required=true
json.servlet.https.required=true
jsonws.servlet.https.required=true
spring.remoting.servlet.https.required=true
tunnel.servlet.https.required=true
webdav.servlet.https.required=true
rss.feeds.https.required=true

dl.store.impl=com.liferay.portal.store.file.system.AdvancedFileSystemStore

Okay, so first of all, don't just copy this into your portal-ext.properties file as-is. You'll need to edit it for names, sites, addresses, etc. It also enables clusterlink and sets up use of https as well as the advanced filesystem store.

I tend to use TCPPING for my ClusterLink configuration as unicast doesn't have some of the connectivity issues. I use a standard configuration (seen below), and use the tomcat setenv.sh file to specify the initial hosts.

<!--
    TCP based stack, with flow control and message bundling. This is usually used when IP
    multicasting cannot be used in a network, e.g. because it is disabled (routers discard
    multicast). Note that TCP.bind_addr and TCPPING.initial_hosts should be set, possibly
    via system properties, e.g. -Djgroups.bind_addr=192.168.5.2 and
    -Djgroups.tcpping.initial_hosts=192.168.5.2[7800]
    author: Bela Ban
-->
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns="urn:org:jgroups"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">
    <TCP bind_port="7800"
         recv_buf_size="${tcp.recv_buf_size:5M}"
         send_buf_size="${tcp.send_buf_size:5M}"
         max_bundle_size="64K"
         max_bundle_timeout="30"
         use_send_queues="true"
         sock_conn_timeout="300"
         timer_type="new3"
         timer.min_threads="4"
         timer.max_threads="10"
         timer.keep_alive_time="3000"
         timer.queue_max_size="500"
         thread_pool.enabled="true"
         thread_pool.min_threads="2"
         thread_pool.max_threads="8"
         thread_pool.keep_alive_time="5000"
         thread_pool.queue_enabled="true"
         thread_pool.queue_max_size="10000"
         thread_pool.rejection_policy="discard"
         oob_thread_pool.enabled="true"
         oob_thread_pool.min_threads="1"
         oob_thread_pool.max_threads="8"
         oob_thread_pool.keep_alive_time="5000"
         oob_thread_pool.queue_enabled="false"
         oob_thread_pool.queue_max_size="100"
         oob_thread_pool.rejection_policy="discard"/>
    <TCPPING async_discovery="true"
             initial_hosts="${jgroups.tcpping.initial_hosts:localhost[7800],localhost[7801]}"
             port_range="2"/>
    <MERGE3 min_interval="10000" max_interval="30000"/>
    <FD_SOCK/>
    <FD timeout="3000" max_tries="3"/>
    <VERIFY_SUSPECT timeout="1500"/>
    <BARRIER/>
    <pbcast.NAKACK2 use_mcast_xmit="false" discard_delivered_msgs="true"/>
    <UNICAST3/>
    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="4M"/>
    <pbcast.GMS print_local_addr="true" join_timeout="2000" view_bundling="true"/>
    <MFC max_credits="2M" min_threshold="0.4"/>
    <FRAG2 frag_size="60K"/>
    <!--RSVP resend_interval="2000" timeout="10000"/-->
    <pbcast.STATE_TRANSFER/>
</config>
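As the comment at the top of that stack notes, TCP.bind_addr and TCPPING.initial_hosts are supplied as system properties. In Tomcat's setenv.sh, that amounts to appending something like -Djgroups.bind_addr=192.168.5.2 -Djgroups.tcpping.initial_hosts=192.168.5.2[7800],192.168.5.3[7800] to CATALINA_OPTS (the addresses here are illustrative; substitute your own node IPs and ports).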

Additionally, since I want to use the Advanced Filesystem Store, I need an osgi/config/com.liferay.portal.store.file.system.configuration.AdvancedFileSystemStoreConfiguration.cfg file with the following contents:

##
## To apply the configuration, place this file in the Liferay installation's osgi/modules folder.
## Make sure it is named com.liferay.portal.store.file.system.configuration.AdvancedFileSystemStoreConfiguration.cfg.
##
rootDir=/liferay/document_library

JVM & App Server Config

So of course I use the Deployment Checklist to configure the JVM, GC, and memory settings. I prefer to use at least an 8G memory partition. This is also where I add the JGroups initial hosts.

Conclusion

Conclusion? But we haven't really started up the portal yet, how can this be the conclusion?

And that is really the point. All configuration is done before the portal is launched. For any other settings that could be changed in the System Settings control panel, I would likewise create osgi/config file(s) to hold the values.

The more that is done in configuration pre-launch, the less likely you are to load unnecessary data or create user public/private layouts that might not be needed, and the more likely you are to have the proper filesystem store out of the gate...

It really is how the pros get their Liferay environments up and running...

David H Nebinger 2018-11-16T05:15:00Z
Categories: CMS, ECM

Boosting Search

Liferay - Thu, 11/15/2018 - 17:43
Introduction

A client was recently moving off of Google Search Appliance (GSA) onto Liferay and Elasticsearch. One key aspect of GSA that they relied on, though, was KeyMatch.

What is KeyMatch? Well, in GSA an administrator can define a list of specific keywords and assign content to them. When a user performs a search that includes one of the specific keywords, the associated content is boosted to the top of the search results.

This way an admin can ensure that a specific piece of content can be promoted as a top result.

For example, say you run a bakery. During holidays, you have specially decorated cakes and cupcakes. You might define a KeyMatch for "cupcake" pointing at your specialty cupcakes, so when a user searches, they get the specialty cupcakes ahead of your normal ones.

Elasticsearch Tuning

So Elasticsearch, the heart of the Liferay search facilities, does not have KeyMatch support. In fact, it may often seem that there are few search result tuning capabilities at all. This is not the case, though.

There are tuning opportunities for Elasticsearch, but it does take some effort to get the outcomes you're hoping for.

Tag Boosting

So one way to get a result similar to KeyMatch would be to boost the match for tags.

In our bakery example above, all of our content related to cupcakes will, of course, appear in the search results for "cupcake", if only because the keyword is part of the content. Tagging content with "cupcake" would also get it to come up as a search result, but may not make it score high enough to stand out.

We could, however, use tag boosting so that a keyword match on a tag would push a match to the top of the search results.

So how do you implement a tag boost? Through a custom IndexerPostProcessor implementation.

Here's one that I whipped up that will boost tag matches by 100.0:

@Component(
    immediate = true,
    property = {
        "indexer.class.name=com.liferay.journal.model.JournalArticle",
        "indexer.class.name=com.liferay.document.library.kernel.model.DLFileEntry"
    },
    service = IndexerPostProcessor.class
)
public class TagBoostIndexerPostProcessor
    extends BaseIndexerPostProcessor implements IndexerPostProcessor {

    @Override
    public void postProcessFullQuery(
            BooleanQuery fullQuery, SearchContext searchContext)
        throws Exception {

        List<BooleanClause<Query>> clauses = fullQuery.clauses();

        if ((clauses == null) || (clauses.isEmpty())) {
            return;
        }

        for (BooleanClause<Query> clause : clauses) {
            updateBoost(clause.getClause());
        }
    }

    protected void updateBoost(final Query query) {
        if (query instanceof BooleanClauseImpl) {
            BooleanClauseImpl<Query> booleanClause =
                (BooleanClauseImpl<Query>)query;

            updateBoost(booleanClause.getClause());
        }
        else if (query instanceof BooleanQueryImpl) {
            BooleanQueryImpl booleanQuery = (BooleanQueryImpl)query;

            for (BooleanClause<Query> clause : booleanQuery.clauses()) {
                updateBoost(clause.getClause());
            }
        }
        else if (query instanceof WildcardQueryImpl) {
            WildcardQueryImpl wildcardQuery = (WildcardQueryImpl)query;

            if (wildcardQuery.getQueryTerm().getField().startsWith(
                    Field.ASSET_TAG_NAMES)) {

                query.setBoost(100.0f);
            }
        }
        else if (query instanceof MatchQuery) {
            MatchQuery matchQuery = (MatchQuery)query;

            if (matchQuery.getField().startsWith(Field.ASSET_TAG_NAMES)) {
                query.setBoost(100.0f);
            }
        }
    }

}

So this is an IndexerPostProcessor implementation that is bound to all JournalArticles and DLFileEntries. When a search is performed, the postProcessFullQuery() method will be invoked with the full query to be processed and the search context. The code above identifies all tag matches and increases the boost for them.

This implementation uses recursion because the passed in query is actually a tree; processing via recursion is an easy way to visit each node in the tree looking for matches on tag names.

When a match is found, the boost on the query is set to 100.0.

Using this implementation, if a single article is tagged with "cupcake", a search for "cupcake" will cause those articles with the tag to jump to the top of the search results.

Other Modification Ideas

This is an example of how you can modify the search before it is handed off to Elasticsearch for processing.

It can be used to remove query items, change query items, add query items, etc.

It can also be used to adjust the query filters to exclude items from search results.

Conclusion

So the internals of the postProcessFullQuery() method and its arguments are not really documented, at least not in any detail I could find on adjusting query results.

Rather than reading through the code for how the query is built, when I was creating this override, I actually used a debugger to check the nodes of the tree to determine types, fields, etc.

I hope this will give you some ideas about how you too might adjust your search queries in ways to manipulate search results to get the ordering you're looking for.

David H Nebinger 2018-11-15T22:43:00Z
Categories: CMS, ECM

Liferay And...  Jackson

Liferay - Tue, 11/13/2018 - 18:27
Introduction

When discussing how to deal with dependencies, we're often looking for ways to package our third party jars into our custom modules.

There's good reason to do this. It ensures that our modules get a version of a third party jar that we've tested with. It also excludes ambiguity over where the dependency will come from, whether it is deployed and available or not, etc.

That said, there is another option that we don't really talk about much, even though it is still a viable one. Many third party jars are actually OSGi-ready and can be deployed as modules separately from your own custom modules.

Jackson's jars, for example, are OSGi modules in their own right and can be deployed to Liferay just by dropping them into the deploy folder.

Dependencies Deployed as Modules

So why deploy a third party dependency jar as an OSGi module instead of just as an embedded jar?

Often it comes down to either a concern about class loaders or, less frequently, a (misguided) attempt to shrink overall module size. I say this is misguided because memory and disk are cheap, and the problems (to be discussed below) often are not worth it.

So what about the class loader concern? When you are using a system that relies on class loaders to instantiate Java classes, as Jackson does when marshaling JSON into Java objects decorated with Jackson annotations, class loader hierarchies and the normal boundaries between OSGi modules can make general usage a challenge when the annotations are used in different bundles.

For example, if you have module A and module B and both have POJOs decorated with Jackson annotations, you could run into issues with the annotations. When performing a package scan for classes decorated with the annotations, if the annotation is loaded by a different class loader it is effectively a different class and may not be visible during annotation processing.
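To see why "effectively a different class" matters, here's a tiny standalone demonstration (the jar path is illustrative; it assumes a local copy of the jackson-annotations jar):

import java.net.URL;
import java.net.URLClassLoader;

public class ClassIdentityDemo {

    public static void main(String[] args) throws Exception {
        URL jar = new URL("file:jackson-annotations-2.9.6.jar");

        // Two loaders with no shared parent for this jar each define
        // their own copy of the annotation class.
        try (URLClassLoader a = new URLClassLoader(new URL[] {jar}, null);
             URLClassLoader b = new URLClassLoader(new URL[] {jar}, null)) {

            Class<?> ca = a.loadClass(
                "com.fasterxml.jackson.annotation.JsonProperty");
            Class<?> cb = b.loadClass(
                "com.fasterxml.jackson.annotation.JsonProperty");

            // Prints false: different defining loaders, so annotation
            // lookups keyed on one class miss instances of the other.
            System.out.println(ca == cb);
        }
    }
}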

If your package is deployed as a standalone module, though, then all bundles sharing the dependency will pull from the same module and therefore the same class loader.

A downside of this, though, is versioning. If you deploy Jackson 2.9.3 and 2.9.7, there are two competing versions available, which can still lead to class loader issues when the different versions are used at the same time. With just a single version deployed, you have the typical concern of all modules being stuck on an agreed-upon version.

Is My Dependency OSGi Ready?

So the first thing you'll need to know is whether your third party dependency jar is an OSGi module or not.

The most complicated way to find out is by opening up your jar with a zip tool to look at the contents. If the jar is a bundle, the META-INF/MANIFEST.MF file will contain the OSGi headers like Bundle-Name, Bundle-SymbolicName, Bundle-Version, etc. Additionally you may have OSGi-specific files in the META-INF folder for declarative services.
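For example, cracking open jackson-core 2.9.6 should show headers along these lines (abridged):

Bundle-SymbolicName: com.fasterxml.jackson.core.jackson-core
Bundle-Name: Jackson-core
Bundle-Version: 2.9.6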

An easier way is just to use one of the Maven repo search tools. When looking for jackson-core 2.9.6 in mvnrepository.com, you come across the page like https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-core/2.9.6:

Under the Files section, it is shown as a bundle along with its size. This means it is an OSGi-ready bundle. When a jar is not OSGi-ready, the search tools will typically show it as just a jar.

This and the other Jackson jars are all marked as bundles, so I know I can deploy them as modules.

Deploying Jackson as Modules

So for a future "Liferay And..." blog post, I have need of Jackson as a module instead of as just a dependency, so in this post we're going to focus on deploying Jackson as modules. Sure this may not be necessary for all deployments or usage of Jackson, but it is for me.

Okay, so our test is going to be to build a couple of modules with some POJOs decorated with Jackson annotations, plus a module that will be marshaling to/from JSON. In order to do this, we need to have the following Jackson modules deployed:

  • compile group: 'com.fasterxml.jackson.core', name: 'jackson-core', version: '2.9.6'
  • compile group: 'com.fasterxml.jackson.core', name: 'jackson-annotations', version: '2.9.6'
  • compile group: 'com.fasterxml.jackson.core', name: 'jackson-databind', version: '2.9.6'
  • compile group: 'com.fasterxml.jackson.datatype', name: 'jackson-datatype-jdk8', version: '2.9.6'
  • compile group: 'com.fasterxml.jackson.datatype', name: 'jackson-datatype-jsr310', version: '2.9.6'
  • compile group: 'com.fasterxml.jackson.module', name: 'jackson-module-parameter-names', version: '2.9.6'

Download these bundle jars and drop them into the Liferay deploy folder while Liferay is running. The bundles will deploy into Liferay and you'll see the messages in the log that the bundles are deployed and started.

Building Test Modules

So in the referenced Github repo, we'll build out a Liferay workspace with four module projects:

  1. Animals - Defines the POJOs with Jackson annotations for defining different pet instances.
  2. Persons - Defines a POJO for a person to define their set of pets.
  3. Mappings - Services based upon using Jackson to marshal to/from JSON.
  4. Gogo-Commands - Provides some simple Gogo commands that we can use to test the modules w/o building out a portlet infrastructure.

The Github repo is: https://github.com/dnebing/liferay-and-jackson

The animals and persons modules are nothing fancy, but they do leverage the Jackson annotations from the deployed OSGi Jackson modules.

The mappings module uses the Jackson ObjectMapper to handle the marshaling. It is capable of processing classes from the other modules.
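As a rough illustration of what that marshaling looks like (the real POJOs live in the animals and persons modules; the stand-in Cat class here just keeps the snippet self-contained):

import com.fasterxml.jackson.databind.ObjectMapper;

public class MappingDemo {

    public static class Cat {
        public String name;
        public int age;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // JSON -> POJO
        Cat cat = mapper.readValue(
            "{\"name\":\"claire\",\"age\":6}", Cat.class);

        // POJO -> JSON
        System.out.println(mapper.writeValueAsString(cat));
    }
}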

The gogo module contains some simple gogo shell commands:

  • jackson:createCat: Creates a Cat instance and outputs the toString representation of it. Args are [name [breed [age [favorite treat]]]].
  • jackson:createDog: Creates a Dog instance and outputs the toString representation of it. Args are [name [breed [age [likes pigs ears]]]].
  • jackson:catJson: Like createCat, but outputs the JSON representation of the cat.
  • jackson:dogJson: Like createDog, but outputs the JSON representation of the dog.
  • jackson:cat: Parses the given JSON into a Cat object and outputs the toString representation of it. The only argument is the JSON.
  • jackson:dog: Parses the given JSON into a Dog object and outputs the toString representation of it. The only argument is the JSON.

For the cat and dog commands, to pass JSON as a single argument, enclose it in single quotes:

jackson:cat '{"type":"cat","name":"claire","age":6,"breed":"house","treat":"filets"}'

Conclusion

Seems like an odd place to stop, huh?

I mean, we have identified how to find OSGi-ready modules such as the Jackson modules, we have deployed them to Liferay, and we have built modules that depend upon them.

So why introduce Jackson like this and then stop? Well, it is just a preparatory blog for my next post, Liferay And... MongoDB. We'll be leveraging Jackson as part of that solution, so the Jackson deployment is a good starting point. See you in the next post!

David H Nebinger 2018-11-13T23:27:00Z
Categories: CMS, ECM

Get that old school page editing touch back again

Liferay - Tue, 11/13/2018 - 04:56

At this year's Unconference at DEVCON in Amsterdam, Victor Valle held a session about changing the look-and-feel of the portal administration: things like removing the default product menu and creating your own. I missed that discussion because so many other interesting discussions were going on, so I have no idea if someone brought this up. In this blog article, I'm not going to do anything "drastic" like removing a side menu. I just want to show you how easy it is to get something back which some of us have missed since version 6.2.

I've heard some people complain about the way the page editing works since version 7.0, and it seems like themelets are still pretty much an underrated or unknown addition to Liferay theming. I hope this blog will tackle both.

By default you'll have to hover over a portlet or widget to get the configuration options:

Nothing wrong with it. The page is displayed as anyone without editing rights would see the page. No need to toggle that icon which shows/hides the controls. But for those who have to do a lot of page editing or widget configuration, ... every second counts. We just want to click on the ellipsis icon to configure that widget without having to hover over it first.

Some of us want this:

There are different ways to get this toggle controls feature back

After all, the controls icon did not disappear. It only becomes visible when your screen size is small enough. So it's all a matter of tweaking the styles.

Option 1: Portlet decorators

You can easily add your own portlet decorator and make it the default one in your theme. You just need to add the custom decorator in the  look-and-feel.xml of your theme:

<portlet-decorator id="show-controls" name="Show Controls">
    <default-portlet-decorator>true</default-portlet-decorator>
    <portlet-decorator-css-class>portlet-show-controls</portlet-decorator-css-class>
</portlet-decorator>

Then you will want to add some CSS to the theme so the toggle controls icon is displayed at all times. Toggling the icon will add a css class "controls-visible" to the body element. This is easy: just display the "portlet-topper" every time "controls-visible" is present.

Why this option is bad for this purpose: I believe portlet decorators are meant for styling purposes, i.e. how you want the user to see the widget when you select a certain decorator. If you want to use portlet decorators for this purpose while also displaying widgets in different ways, with(out) borders or titles, you'll lose the controls anyway when you assign a different decorator to a widget. I bet you'll say: "Just put the extra styles on every widget". So just forget I even brought this up.

Option 2: Add some css and js

We'll add the custom css to the theme:

/* old school portlet decorators, don't mind the shameless usage of !important */
.control-menu .toggle-controls {
    display: block !important;
}

.controls-visible.has-toggle-controls {
    .portlet-topper {
        display: -webkit-box !important;
        display: -moz-box !important;
        display: box !important;
        display: -webkit-flex !important;
        display: -moz-flex !important;
        display: -ms-flexbox !important;
        display: flex !important;
        position: relative;
        opacity: 1;
        transform: none !important;
    }

    section.portlet {
        border: 1px solid #8b8b8b;
        border-radius: 0.5rem;
    }
}

.controls-hidden {
    .portlet-topper {
        display: none !important;
    }
}

We're not there yet. We'll need to put a "has-toggle-controls" class on the body element because it seems "controls-visible" is there by default, even when you're not logged in. And in this case I want to display a border around the portlet when the controls are active. So I'll be setting my own css class like this inside main.js:

var toggleControls = document.querySelector('.toggle-controls');

if (toggleControls !== null) {
    document.body.classList.add('has-toggle-controls');
}

But what if we had lots of different themes. Are we really going to add the same code over and over again? And what if business decides to add a box-shadow effect after a few weeks?

Option 3: Themelets

A themelet is an extension of a theme. You can extend all your themes with the same themelet. When some style needs to change, you just edit the themelet and rebuild your themes. This will require you to use the liferay-theme-generator. But I bet everyone does by now, right? More information on how to build and use themelets.

You can find the themelet I wrote on github.

Please be sure your theme's _custom.scss imports the css from themelets:

/* inject:imports */ /* endinject */

And your portal_normal.ftl should contain:

<!-- inject:js -->
<!-- endinject -->

Conclusion

When changing the behaviour or design of aspects that are not design-related in the eyes of end users or guests, I think it's always better to go with themelets. You'll rarely come across a portal with only one theme, so using themelets will make it much easier to maintain your themes.

Michael Adamczyk 2018-11-13T09:56:00Z
Categories: CMS, ECM

The reason we moved to 7zip bundles

Liferay - Mon, 11/12/2018 - 14:51

As some of you may have already discovered, 7.1 GA2 was released as a 7zip bundle instead of the typical zip bundle. This probably caused a ton of issues. Even our own Dev Tools are not yet equipped to handle 7z files since all the events took up our time.

I will provide you with the reasons why we had to make this move and hopefully, everyone will come to the conclusion that this was the best decision, albeit, the communication could have been handled significantly better.

The original goal that led to providing 7z bundles was to improve startup times. We discovered that if we prepopulate the OSGi state, we could reduce startup times by a factor of 2-3. As we began our testing, we found that our zip bundles were not preserving timestamps correctly: they were rounding timestamps to the nearest second, which invalidated our OSGi state. We also found that our bundles had grown to 1.2 gigabytes!

This improvement imposed 2 requirements:
  • maintain the original timestamps
  • cope with a significantly increased number of duplicate files

We began to look for solutions. Naturally, tar.gz was the first solution that came to mind. It would easily preserve the timestamps, but it did not solve the file size issue. While some people may find a large download acceptable, we did not believe it would be appropriate for some of our use cases. As a result, someone suggested that we investigate 7zip, because 7zip detects duplicate files and treats them as a single file during compression. This brought the file size down significantly, from 1.2 gigabytes to 400 megabytes. It was the perfect solution for us, and this is why we ultimately decided to use 7zip instead of zips.

Since our initial development, we have also fixed the duplicate file issue. This means that tar.gz is also viable as a solution (though the bundles are slightly larger at 600 megabytes). From now on, we will be providing 7zip bundles and also tar.gz bundles. Internally we will be using 7zip because ultimately that 200-megabyte difference is still too significant for our use cases, but for everyone else, you guys can decide what works best for you.  

David Truong 2018-11-12T19:51:00Z
Categories: CMS, ECM

Liferay Portal 7.1 CE GA2 Release

Liferay - Mon, 11/05/2018 - 23:00
What's New Downloads

Download the release now at: https://www.liferay.com/downloads-community

New Features Summary

Web Experience: Fragments allow a content author to create small reusable content pieces. Fragments can be edited in real time or can be exported and managed with the tooling of your choice. Use content pages from within a site to have complete control over the layout of your pages. Navigation Menus lets you create site navigation in new and interesting ways and have full control over the navigation's visual presentation.
 


Forms Experience: Forms can now have complex grid layouts, numeric fields, and file uploads. They now include new personalization rules that let you customize the default behavior of the form. Using the new Element Sets, form creators can now create groups of reusable components. Form fields can now be translated into any language using any locale and can also be easily duplicated.
 


Redesigned System Settings: System Settings have received a complete overhaul. Configuration options have been logically grouped together making it easier than ever before to find what's configurable. Several options that were located under Server Administration have also been moved to System Settings.
 


User Administration: The User account form has been completely redesigned. Each form section can now be saved independently of the others, minimizing the chance of losing changes. The new ScreensNavigationEntry lets developers add custom forms under user administration.

Improvements to Blogs and Forums: Blog readers can now unsubscribe from notifications via email. Blog authors now have complete control over the friendly URL of the entry. Estimated reading time can be enabled in System Settings and is calculated based on the length of the entry.

Blogs also have a new cards ADT that can be selected from the application configuration. Videos can now be added inline while writing a new entry from popular services such as YouTube, Vimeo, Facebook Video, and Twitch. Message boards users can now attach as many files as they want by dragging and dropping them in a post. Message boards have also received many visual updates.
 


Workflow Improvements: Workflow has received a complete UI overhaul. All workflow configuration is now consolidated under one area in the Control Panel. Workflow definitions are now versioned and previous versions can be restored. Workflow definitions can now be saved in draft form and published live when they are ready.

Infrastructure: Many improvements have been incorporated at the core platform level, including Elasticsearch 6.0 support and the inclusion of Tomcat 9.0.

Documentation

Official documentation can be found on the Liferay Developer Network. For information on upgrading, see the Upgrade Guide.

Bug Reporting

If you believe you have encountered a bug in the new release you can report your issue on issues.liferay.com, selecting the "7.1.0 CE GA2" release as the value for the "Affects Version/s" field.

Getting Support

Support is provided by our awesome community. Please visit our community website for more details on how you can receive support.

Liferay and its worldwide partner network also provides services, support, training, and consulting around its flagship enterprise offering, Liferay DXP.

Also note that customers on existing releases such as 6.2 and 7.0 continue to be professionally supported, and the documentation, source, and other ancillary data about these releases will remain in place.

Jamie Sammons 2018-11-06T04:00:00Z
Categories: CMS, ECM

Mitigating RichFaces 4.5.17.Final EOL Vulnerabilities

Liferay - Mon, 11/05/2018 - 18:16
Mitigating RichFaces 4.5.17.Final EOL Vulnerabilities

If you are using RichFaces, you should be aware that Code White has discovered some remote code execution vulnerabilities in the component library. Unfortunately, since RichFaces has reached end-of-life status, these vulnerabilities will not be fixed. Thankfully, there are two easy options to mitigate them:

  1. Migrate to Alberto Fernandez’s fork of RichFaces.

    Alberto has fixed the known security vulnerabilities and other issues with RichFaces, so you should be able to upgrade to his latest release with little trouble:

    <dependency>
        <groupId>com.github.albfernandez.richfaces</groupId>
        <artifactId>richfaces</artifactId>
        <version>4.6.5.ayg</version>
    </dependency>
  2. Disable resource serialization.

    RichFaces has a whitelist of classes that it will deserialize. By setting the whitelist to empty you can avoid this remote code execution vulnerability. Just add the following content to a file named src/main/resources/org/richfaces/resource/resource-serialization.properties in your Maven or Gradle project:

    # Disable resource serialization to disallow remote code execution:
    # CVE-2013-2165, RF-14310, CVE-2015-0279, RF-13977, and RF-14309.
    # See https://codewhitesec.blogspot.com/2018/05/poor-richfaces.html for more details.
    whitelist=

The Liferay Faces team has used the second mitigation method to protect our RichFaces demos and archetypes. We have released new versions of our RichFaces archetypes with the mitigation included. Please see the release notes for more details.

Kyle Joseph Stiemann 2018-11-05T23:16:00Z
Categories: CMS, ECM

For a more conspicuous SPA loading indicator

Liferay - Thu, 11/01/2018 - 17:15

// The french version of this article can be found here: Pour un indicateur de chargement SPA plus visible.

Since version 7.0 of Liferay, you have surely noticed the appearance of a thin loading bar at the top of the screen after most user actions.

This loading bar is part of the new SPA (Single Page Application) mode of Liferay, supported by the Senna.js framework.

Unfortunately, this bar is so inconspicuous that most users do not see it. In general, without visual feedback for their action, they repeat it several times, which often lengthens the waiting time.

In the end, users are often unnecessarily frustrated just because this load indicator is not visible enough.

Fortunately, it's quite simple to fix this with a few lines of CSS in a custom theme, because this loading bar is just a single HTML tag on which a CSS class is dynamically applied.

<div class="lfr-spa-loading-bar"></div>

As a starting point, we can consider the superb loaders provided by Luke Haas in his project Single Element CSS Spinners. Just make some adaptations to get a CSS loader compatible with Liferay:

/* Reset properties used by the original loader */
.lfr-spa-loading .lfr-spa-loading-bar,
.lfr-spa-loading-bar {
    -moz-animation: none 0 ease 0 1 normal none running;
    -webkit-animation: none 0 ease 0 1 normal none running;
    -o-animation: none 0 ease 0 1 normal none running;
    -ms-animation: none 0 ease 0 1 normal none running;
    animation: none 0 ease 0 1 normal none running;
    display: block;
    -webkit-transform: none;
    -moz-transform: none;
    -ms-transform: none;
    -o-transform: none;
    transform: none;
    background: transparent;
    right: initial;
    bottom: initial;
}

/* Pure CSS loader from https://projects.lukehaas.me/css-loaders */
.lfr-spa-loading .lfr-spa-loading-bar,
.lfr-spa-loading .lfr-spa-loading-bar:after {
    border-radius: 50%;
    width: 10em;
    height: 10em;
    z-index: 1999999;
}

.lfr-spa-loading .lfr-spa-loading-bar {
    margin: 60px auto;
    font-size: 10px;
    text-indent: -9999em;
    border-top: 1.1em solid rgba(47, 164, 245, 0.2);
    border-right: 1.1em solid rgba(47, 164, 245, 0.2);
    border-bottom: 1.1em solid rgba(47, 164, 245, 0.2);
    border-left: 1.1em solid #2FA4F5;
    -webkit-transform: translateZ(0);
    -ms-transform: translateZ(0);
    transform: translateZ(0);
    -webkit-animation: load8 1.1s infinite linear;
    animation: load8 1.1s infinite linear;
}

@-webkit-keyframes load8 {
    0% {
        -webkit-transform: rotate(0deg);
        transform: rotate(0deg);
    }
    100% {
        -webkit-transform: rotate(360deg);
        transform: rotate(360deg);
    }
}

@keyframes load8 {
    0% {
        -webkit-transform: rotate(0deg);
        transform: rotate(0deg);
    }
    100% {
        -webkit-transform: rotate(360deg);
        transform: rotate(360deg);
    }
}

/* Positioning */
.lfr-spa-loading .lfr-spa-loading-bar {
    position: fixed;
    top: 50%;
    left: 50%;
    margin-top: -5em;
    margin-left: -5em;
}

Once the custom theme is applied, we get a clearly visible loader that no user can miss:

This snippet supports Liferay 7.0 and 7.1 and is also available on gist.

If you also have tips for improving the UX of a Liferay portal, feel free to share them in the comments of this post or in a dedicated blog post.

Sébastien Le Marchand
Freelance Technical Consultant in Paris

Sébastien Le Marchand 2018-11-01T22:15:00Z
Categories: CMS, ECM