AdoptOS

Assistance with Open Source adoption

Open Source News

Announcing 1st Annual CiviCRM Governance Summit & Code Sprint

CiviCRM - Sun, 04/15/2018 - 17:04

The CiviCRM Core Team is pleased to announce what we hope will become an annual event: a combined governance summit and code sprint. This year’s event will begin on September 25th, immediately following CiviCamp Hartford, and will be held in West Milford, New Jersey (within an hour of major airports). Full event details, including the agenda and discussion, are online (or will be soon) here: https://lab.civicrm.org/community-team/governance-summit-code-sprint/wikis/home

Categories: CRM

The Essentials of CiviVolunteer training session - April 17th at 10 am MT

CiviCRM - Sun, 04/15/2018 - 09:43

Your volunteers are a crucial part of your organization’s team and contribute in many ways that help your non-profit reach its goals. CiviVolunteer 2.0 is a CiviCRM extension created to help organizations effectively manage this important aspect of their daily operations.

This two-hour on-line training will cover all the essential components of CiviVolunteer 2.0 that will allow you to better track, organize and communicate with your volunteers.

Categories: CRM

Liferay Portal 7.1 Alpha 1 Release

Liferay - Thu, 04/12/2018 - 17:51
I'm pleased to announce the immediate availability of: Liferay Portal 7.1 Alpha 1
 
  Download Now!

We announced the Liferay 7.1 Community Beta Program on February 19th alongside our first 7.1 milestone release. The first phase of the program was dedicated to gathering feedback from our community on the new features in each milestone release. Our awesome community heeded the call with over 120 participants and over 130 posts to the feedback forum. We greatly appreciate all the feedback, and based on some of it we even made changes to the product itself!

With that said, it is my pleasure to announce Liferay 7.1 Alpha 1. With this release we would also like to launch phase 2 of our beta program: bug reports. If you run into an issue using Alpha 1, please let us know by posting it in our Feedback Forums. If you have yet to sign up for the beta program, it's never too late. Sign up today!

New Features Summary

Modern Site Building: Liferay 7.1 introduces a new way of adding content. Fragments allow a content author to create content in small reusable pieces. Fragments can be edited in real time or can be exported and managed with the tooling of your choice. Use page templates from within a site and have complete control over the layout of your content pages. Navigation menus now give you complete control over site navigation. Create site navigation in new and interesting ways and have full control over the navigation's visual presentation.

Forms Experience: Liferay 7.1 includes a completely revamped forms experience. Forms can now have complex grid layouts, numeric fields and file uploads. They now include new personalization rules that let you customize the default behavior of a form. Using the new Element Sets, form creators can build groups of reusable components. Form fields can now be translated into any language using any Liferay locale and can also be easily duplicated.

Redesigned System Settings: System Settings has received a complete overhaul. Configurations have been logically grouped together, making it easier than ever before to find what's configurable. Several options that were located in Server Administration have also been moved to System Settings.

User Administration: The user account form has been completely redesigned. Each form section can now be saved independently of the others, minimizing the chance of losing changes. The new ScreensNavigationEntry lets developers add any form they want to user administration.

Improvements to Blogs and Forums: Blog readers can now unsubscribe from notifications via email. Friendly URLs used to be generated from the entry's title; authors now have complete control over the friendly URL of an entry. Estimated reading time can be enabled in System Settings and is calculated from the time it takes to read an entry. Blogs also have a new cards ADT that can be selected from the application configuration. Videos from popular services such as YouTube, Vimeo, Facebook Video, and Twitch can now be added inline while writing a new entry. Message boards users can now attach as many files as they want by dragging and dropping them into a post. Message boards have also received many visual updates.

Workflow Improvements: Workflow has received a complete UI overhaul. All workflow configuration is now consolidated under one area in the Control Panel. Workflow definitions are now versioned, and previous versions can be restored. Workflow definitions can now be saved in draft form and published live when they are ready.

Infrastructure: Many improvements have been incorporated at the core platform level, including Elasticsearch 6.0 and the inclusion of Tomcat 9.0. At the time of this release, JDK 8 is still the only supported JDK.

Known Issues

Documentation

Documentation for Liferay 7.1 is well underway. Many sections have already been completed in the deployment and development sections. For information on upgrading to 7.1, see the Upgrade Guide.

Jamie Sammons 2018-04-12T22:51:22Z
Categories: CMS, ECM

How to Measure Customer Experiences

Liferay - Thu, 04/12/2018 - 12:28

A well-constructed and effective customer experience is a crucial part of business strategy today. No matter the industry, customer experiences (CX) that meet target audience needs and help convert those audiences into customers are essential to continued commercial success. But because CX is often difficult to define, companies can struggle to gauge it, determine its value and make changes that improve ROI.

However, measuring CX successfully can be done with the right insights and, when done correctly, can help businesses effectively shape and refine their strategies.

Requirements for Measuring Customer Experience

According to Gartner, three conditions must be met for companies to successfully implement customer experience measurement.

  1. Measure CX Across Levels of Management - Companies should work to understand how customer experience impacts various levels of management, ranging from C-suite executives to operational leaders across the organization. These various measurements show how CX affects business outcomes, cross-departmental issues, department tactics and more. Should you only focus on one level, valuable insights may be missed.
  2. Include Metrics from All Departments - As opposed to the vertical nature of the first condition, this second condition is horizontal in nature and is meant to encompass the many different teams that make up a company. A metric such as customer satisfaction can vary between departments and reflect how each impacts the experience. Measuring such a metric across a business will provide comprehensive data and insights regarding where improvements should be made and how they may need to differ depending on the department.
  3. Balance the Rational with the Emotional - Businesses should not only measure the quality of the services provided, but the emotions they provoke within customers. Customers will have an emotional reaction to the treatment they receive from a company and these reactions will influence rational decisions. The more they love an experience, the more loyal they will become, and the more they dislike it, the more likely they are to leave for a competitor.

Having these conditions in place will help your team approach metrics correctly, preventing accidentally skewed data and enabling effective application of CX improvements across the company.

Measuring Customer Experience

Because customer experience involves all aspects of a consumer interacting with a business, there are many elements that may be measured by an organization. However, the following aspects can provide highly useful CX insights, regardless of the industry of a company.

  • Customer Satisfaction Scores - Companies should extensively poll customer satisfaction across all departments and keep detailed records to better understand weak points and areas for potential improvement, which can boost customer experience as a whole.
  • Product or Service Quality Metrics - Beyond being satisfied with their experiences, customers should be enabled to rate the products and services they receive. Whether this is through a third-party site or on the company’s own, receiving evaluations of products can show if CX issues are caused by what customers purchase, rather than how a company provides it.
  • Employee Engagement - A workforce that is committed to your company’s goals and values, and determined to perform at its best, makes for a more effective team. By measuring employees’ investment in the company through anonymous surveys and performance evaluations, you can better determine how well a team is performing its duties. The result is the ability to equip and train your team as needed.
  • First Call Resolution Rates - An effective customer service center will have a higher likelihood of resolving customers’ problems during their first call or online chat. The higher the resolution rate, the more effective your service. Low rates should be a sign that your customer service is in need of improvement for a better customer experience.
  • Net Promoter Score - Beyond customer satisfaction, a net promoter score determines customer loyalty with one question - “How likely is it that you would recommend our company/product/service to a friend or colleague?” Based on a score of 0 to 10, a company can determine which customers will likely buy more and refer, as well as which will likely not return, according to Forbes. These insights help determine the longevity of a customer base and what should be done to improve CX.
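To make the arithmetic concrete, here is a minimal sketch of the standard NPS calculation (the code and sample data are ours, purely for illustration): respondents scoring 9-10 count as promoters, 0-6 as detractors, and the score is the percentage of promoters minus the percentage of detractors.

// Minimal sketch of the standard NPS arithmetic over 0-10 survey responses.
public class NetPromoterScore {

    public static long nps(int[] scores) {
        int promoters = 0;
        int detractors = 0;

        for (int score : scores) {
            if (score >= 9) {
                promoters++;         // 9-10: promoter
            }
            else if (score <= 6) {
                detractors++;        // 0-6: detractor (7-8 are passives)
            }
        }

        return Math.round(100.0 * (promoters - detractors) / scores.length);
    }

    public static void main(String[] args) {
        int[] responses = {10, 9, 8, 6, 10, 3, 9, 7};

        // 4 promoters, 2 detractors out of 8 responses -> NPS of 25
        System.out.println("NPS: " + nps(responses));
    }
}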
Improving Your Customer Experience

Following a successful customer experience evaluation, companies will have the opportunity to make improvements to various aspects of the experience as needed. Once you have collected your CX metrics, consider the following recommendations on how you should take action.

  • Don’t focus on one major customer experience metric, but take multiple lower-level metrics, such as how various departments field a complaint about a product, into account for a more balanced review of CX.
  • Consider the effects of a change on all departments. While a specific department may have been in charge of tracking a metric, changes regarding how your company does business and interacts with customers must be considered to prevent negatively affecting some departments while improving others.
  • Take customer emotions into consideration. While rational, statistically-backed CX changes are crucial, remember how your audience will emotionally react to the changes you make, both positively and negatively.
  • Determine a hierarchy of metrics to guide CX plans and how you invest both time and money in your improvement efforts. Decide upon what statistics are most important for your company’s performance and what changes should be prioritized when planning both time and financial investment.

By following these guidelines, a company can approach its metrics and CX improvement efforts with a comprehensive strategy.

Transform Your Customer Experience

Whether you are looking to change specific aspects or completely overhaul your customer experience, find the strategic insights you need in our helpful whitepaper.

Read “Four Strategies to Transform Your Customer Experience”   Matthew Draper 2018-04-12T17:28:23Z
Categories: CMS, ECM

Open Source Election System Certified

Open Source Initiative - Thu, 04/12/2018 - 10:22

OSI Affiliate Member, The National Association of Voting Officials (NAVO), announced this week the certification of the Prime III open source election system for the State of Ohio.

NAVO spokesperson Brent Turner stated the ballot delivery system is “the first step toward appropriately secure voting systems replacing the ‘secret software’ systems that have plagued our democracy.” Turner described the proprietary, vendor-sold U.S. voting systems currently in use as “antiquated, insecure, and a threat to national security,” and pointed to New Hampshire's recent deployment of the “All for One” open source system, based on Prime III, as further momentum. “We have been focused on Florida, California, and New York to upgrade security and reduce costs as well. Now is the historic moment for all the states to step up and defend our democracy. Paper ballots and audits are a plus, but the essence of vote counting security is the public software,” said Turner.

Recently, Hawaii Congresswoman Tulsi Gabbard announced Federal legislation embracing the movement toward open source / publicly owned paper ballot systems (see Rep. Tulsi Gabbard Introduces Securing America’s Elections Act to Ensure Integrity of 2018 Elections, https://gabbard.house.gov/secureelections).

Submitted by Brent Turner, The National Association of Voting Officials

Categories: Open Source

New CiviCRM Extension Provides Graphic, Uncluttered Contact Records

CiviCRM - Wed, 04/11/2018 - 10:18

Continual progress and improvements are some of the key reasons I love working with CiviCRM. A new extension released in March 2018 represents a solid improvement and significantly cleans up the contact record view.

Categories: CRM

Liferay Faces downloads at an all-time high

Liferay - Wed, 04/11/2018 - 10:15
I'm happy to report that, according to download stats from Maven Central, Liferay Faces downloads are trending upward. In fact, our downloads have approximately doubled, reaching an all-time high of more than 11,000 downloads/month! The download stats encompass all artifacts, such as:
  • Liferay Faces Bridge API
  • Liferay Faces Bridge Impl
  • Liferay Faces Bridge Ext
  • Liferay Faces Alloy
  • Liferay Faces Portal
  • Liferay Faces Util
  • Demo portlets
  • Parent pom.xml descriptors
What's more, fine-grained download stats show that JSF 2.2 + Liferay 7.0 has seen the strongest adoption of JSF within Liferay since we started back in 2012. I would like to personally thank Vernon Singleton, Kyle Stiemann, Cody Hoag, Philip White, Juan Gonzalez, and everyone else who has helped make Liferay Faces such a successful project over the years. Also, thanks to our faithful JSF community that keeps in close contact with us via the Liferay Faces forums. Well done all! Neil Griffin 2018-04-11T15:15:13Z
Categories: CMS, ECM

PrestaShop’s solutions in response to the new data protection requirements

PrestaShop - Wed, 04/11/2018 - 08:27
As you know, on the 25th of May 2018, the new regulation on personal data protection (the GDPR) will come into effect. You can read our article about it here:
Categories: E-commerce

Improve forecasting accuracy with opportunity line items

VTiger - Tue, 04/10/2018 - 23:54
Every opportunity has an estimated revenue value. But without the correct line items, this value is just a guesstimate. To ensure that you can accurately predict revenue, Vtiger CRM now enables you to add line items to every opportunity. With this update, the opportunity record will precisely show what and how much of it is […]
Categories: CRM

Nonprofits make CiviCRM what it is, literally

CiviCRM - Tue, 04/10/2018 - 12:12

As part of the release notes for each new version, I compile a list of the people who have contributed code or reviewed changes that go into the release.  As you might expect, many of the familiar names from the CiviCRM partners list are there.

However, a sizable number of nonprofits, big and small, write and edit significant amounts of code, adding features and resolving bugs.  Now that CiviCRM 5.0 has been released, I wanted to take the time to thank all of the nonprofit organizations who have contributed code to the 4.7 series:

Categories: CRM

Higher Logic, CiviCRM, BackOffice Thinking announce pivotal integration

CiviCRM - Tue, 04/10/2018 - 11:17

Higher Logic, the leader in association and nonprofit community engagement platforms, and CiviCRM, the open source CRM and AMS leader, are announcing an important integration. The Higher Logic community platform will now be able to work seamlessly with the leading open source AMS and nonprofit CRM.  CiviCRM has over 10,000 active sites.  Additionally, over 2,000 of these sites have at least 25,000 constituents (see Civi stats).

Categories: CRM

Four Ways Predictive Analytics in Retail Boosts Store Performance

Liferay - Tue, 04/10/2018 - 08:47

Studies show that, more than ever, customers want their favorite brands to anticipate their wants and needs. Buying an item is no longer considered an isolated event; it is part of an integrated, continuous experience that blurs the line between online and offline shopping. While changing technology trends have had visible effects on retail, particularly on physical stores in the United States, where there have been massive closures, integrating modern digital technology with the pleasure of in-store shopping will provide countless benefits for companies in the sector.

The benefits of omnichannel retail include the ability to gather previously untapped data on shopper behavior and on shoppers' interactions with brands. This data has enormous potential, and one of its most powerful uses in the retail world is predictive analysis of customer purchasing behavior. By understanding how shoppers' past actions affect their future decisions, good analytics can anticipate and satisfy needs while also enabling successful marketing strategies, both online and offline.

Below are some of the benefits of predictive analytics in retail and how they can help shape the future of companies in the sector in ways once considered impossible.

1. Targeted, Optimized Promotions

Targeted promotions are used by companies in every industry to add a layer of personalization to their communication and improve their customer relationships. When poorly executed, however, these promotions can have the opposite effect. A study by Access Development found that 57% of respondents consider receiving an ad for a product after rating it negatively to be one of the main reasons to cut ties with a brand.

Properly targeting your promotions means having deep knowledge of each customer, which in turn gives you the information you need about the offers they receive. This includes being aware of past purchasing behavior and being able to anticipate future needs, such as products that complement previous purchases or replenishment offers based on behavioral patterns, as with printer ink cartridges. This can improve customers' interactions with the brand both online and in the physical store, and strengthen the relationship between the two.

2. Predictive Search

Modern websites help customers quickly find what they are looking for through efficient search tools designed to surface the right results and thereby reduce the time spent finding an answer. Predictive analytics takes this a step further, using personalization to anticipate what customers will search for. This includes both autocompleting searches as soon as users begin typing a query and displaying, on landing pages, the products and services users are likely to be looking for before they even start. Amazon's analytics system is one of the best examples today: it keeps users coming back to the site and draws their interest to products they probably would not otherwise see.

A powerful predictive analytics system properly understands user behavior and makes accurate, useful predictions that encourage customers to return to the store and complete their purchases without annoyance or interruption. It is very important, though, that these predictions be as faithful as possible, since incorrect or unwanted results can complicate the search process or even offend customers, as happened with a well-known pregnancy-related promotion from Target. Done correctly, users will be less likely to explore their options with the competition and more inclined to return to your site for future purchases.

3. Optimized Inventory Management

Personalized shopping processes are not only about smoothing customer interactions; they are also about making sure your stores are adequately stocked and prepared for demand. This reduces the likelihood of customer frustration caused by out-of-stock products and cuts costs, since you will not need to make extra product shipments. As a Harvard Business Review article explains, forecasting demand is far more effective for reducing costs and determining inventory levels than basing stock on aggregate sales totals, because analytics enables hyperlocal forecasts that, in turn, allow stock to be distributed geographically.

According to Accenture research, only a third of retailers currently offer their audience basic omnichannel capabilities, such as a satisfying in-store experience and inventory that is visible and accessible across multiple channels. Consider how cost-cutting through predictive analytics could impact your company, and how you can meet your customers' needs by creating an omnichannel retail strategy that connects your online store to your physical one.

4. A Continuous Customer Relationship

Customers like to feel that brands know them individually before, during and after the purchase process. A Rosetta Consulting study indicates that customers who are engaged with a brand complete purchases 90% more often than those who are not, and that these customers also tend to spend 60% more per transaction. Through predictive analytics, omnichannel retail helps companies show their customers that they know them and understand their needs. Purchases made in the physical store are reflected online and, in turn, online purchases can be reflected in the physical store, so that store employees can identify each customer individually and respond quickly to their questions, creating a continuous, frictionless relationship between customer and brand.

These aspects of predictive analytics all focus on improving customer experience and engagement. Predictive shopping will grow in importance for retailers over the coming years as their audiences come to expect offers tied to their behavior, and as companies become able to deliver that kind of service by studying their customers' needs and interests. Used correctly, these capabilities can show that your brand cares about its audience and stands at the forefront of shopping technology.

Equip Your Brand with Predictive Analytics

While each retail brand will have to determine the role predictive analytics plays in its own strategy, its effects on how a brand evolves, and on how precisely each store reacts to customer needs, can be a great help at a time when industry trends change frequently. Building your front-end and back-end systems on a platform that can collect customer data and generate actionable insights will help you better understand your customer base and take effective action as soon as possible.

Embrace the New Era of Retail

Retail technology trends are changing, but that does not mean you have to fall behind. Learn more about the effects of the modern digital era on the retail industry and give your team the tools it needs to succeed.

Discover the latest retail strategies  Rebeca Pimentel 2018-04-10T13:47:00Z
Categories: CMS, ECM

BND Instruction To Avoid

Liferay - Mon, 04/09/2018 - 23:08
Introduction

Recently I was building a fragment bundle to expose a private package, per an earlier blog entry of mine. In the original bnd.bnd file, I found the following:

-dsannotations-options: inherit

Not having seen this before, I had to do some research...

Inheriting References

So I think I just gave it away.

When you add this instruction to your bnd.bnd file, the class hierarchy is searched and all @Reference annotations on parent classes are processed as if they were defined in the component class itself.

Normally, if you have a Foo class with an @Reference and a child Bar class, the parent's references are not handled by OSGi. Instead, you need to add an @Reference annotation to the Bar class and have it call the superclass's setter method (this is also why you should always put your @Reference annotations on protected setters instead of private members: a subclass may need to set the value).
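To make that concrete, here is a minimal sketch of the copying pattern (the SomeService interface is invented for illustration, and both classes are condensed into one listing; in a real project each class would be public and in its own file):

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Hypothetical service interface, purely for illustration.
interface SomeService {
}

// Parent class: the @Reference is on a protected setter so a subclass can delegate to it.
class Foo {
    protected SomeService someService;

    @Reference
    protected void setSomeService(SomeService someService) {
        this.someService = someService;
    }
}

// Child component: without "-dsannotations-options: inherit", SCR only processes
// annotations on Bar itself, so the @Reference must be redeclared here and
// delegated to the superclass setter.
@Component(service = Bar.class)
class Bar extends Foo {

    @Override
    @Reference
    protected void setSomeService(SomeService someService) {
        super.setSomeService(someService);
    }
}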

Once you add the dsannotations instruction to your bnd.bnd file, you no longer have to copy all of those @Reference annotations into the subclasses.

My first thought was that this was cool; it would save me from so much @Reference copying. Surely it would be an instruction I'd want to use all of the time...

Avoid This Instruction

Further research led me to a discussion about supporting @Reference in inheritance found here: https://groups.google.com/forum/#!topic/bndtools-users/6oKC2e-24_E

It turns out that this can be a rather nasty implementation issue. Mainly, if you split Foo and Bar into different bundles, the contexts are different. When processing Bar in a different bundle, it has its own context, class loader, etc., separate from the bundle that holds the Foo parent class. I know OSGi appears to be magic in how it seemingly crosses these contexts without us as developers realizing how, but there's actually some complicated stuff going on under the hood, stuff that you and I really don't want to know too much about.

But for us to correctly and effectively use the dsannotations inheritance, we would have to know a lot more about how this context stuff worked.

Effectively, it's a can of worms, one that you really don't want to rip the lid off of.

So we need to avoid using this instruction, if for that reason alone.

A more complete response, though, comes from Felix Meschberger:

You might be pleased to hear that at the Apache Felix project we once had this feature in our annotations. From that we tried to standardize it actually.

The problem, though, is that we get a nasty coupling issue here between two implementation classes across bundle boundaries and we cannot express this dependency properly using Import-Package or Require-Capability headers.

Some problems springing to mind:

  • Generally you want to make bind/unbind methods private. Would it be OK for SCR to call the private bind method on the base class? (It can technically be done, but would it be OK?)

  • What if we have private methods but the implementor decides to change the name of the private methods — after all they are private and not part of the API surface. The consumer will fail as the bind/unbind methods are listed in the descriptors provided by/for the extension class and they still name the old method names.

  • If we don’t support private method names for that we would require these bind/unbind methods to be protected or public. And thus we force implementation-detail methods to become part of the API. Not very nice IMHO.

  • Note: package private methods don’t work as two different bundles should not share the same package with different contents.

We argued back then that it would be ok-ish to have such inheritance within a single bundle but came to the conclusion that this limitation, the explanations around it, etc. would not be worth the effort. So we dropped the feature again from the roadmap.

If I Shouldn't Use It, Why Is Liferay?

Hey, I had the same question!

It all comes down to the Liferay code base. Even though it is now OSGi-ified code, it still has a solid connection to the historical versions of the code. Blogs, for example, are now done via OSGi modules, but a large part of the code closely resembles code from the 6.x line.

The legacy Liferay code base heavily uses inheritance in addition to composition. Even in the newer Liferay implementation, there is still a heavy reliance on inheritance.

The optimal pattern for OSGi is one of composition and lighter inheritance; it's what makes OSGi Declarative Services so powerful: I can define a new component with a higher service ranking to replace an existing component, and I can wire components together to compose a dynamic solution.

Liferay's heavy use of inheritance, though, means there are a lot of parent classes that would require a heck of a lot of child-class @Reference annotation copying in order to complete injection across the class hierarchy.

While there are plans to rework the code to transition to more composition and less inheritance, this will take some time to complete. Instead of forcing those changes right away, and to eliminate the @Reference annotation copying, Liferay has used the -dsannotations-options instruction to force @Reference annotation processing up the class hierarchy. Generally this is not a problem because the inheritance is typically restricted to a single bundle, so the context-change issues do not arise, although the remainder of the points Felix raised are still a concern.

Conclusion

So now you know as much as I do about the -dsannotations-options BND instruction: why you'll see it in Liferay bundles, and, more importantly, why you shouldn't be using it in your own projects.

And if you are mucking with Liferay bundles and you see the -dsannotations-options instruction, you'll now know why it is there and why you need to keep it around.

David H Nebinger 2018-04-10T04:08:33Z
Categories: CMS, ECM

Using Private Module Binaries as Dependencies

Liferay - Mon, 04/09/2018 - 20:05

When you compare a CE release with an EE release, you'll find that there are a few additional modules that are only available in EE releases. In Liferay terms, these are called "private" modules. They are private in the sense that their source code doesn't exist in any public GitHub repositories (only private ones, and usually inside of a folder named "private"), and their binaries and corresponding source code aren't published to repository.liferay.com.

From a new Liferay developer's perspective, the main roadblock you might encounter with them comes when you want to consume API exposed by one of those private modules, or when you want to extend one of those modules. Essentially, you run into an obstacle immediately: there are no repositories for you to use, so your build-time dependencies are never satisfied.

A seasoned Java developer would quickly realize that you can use Maven to install any JARs you need, and both Maven and Gradle projects would be able to use those installed JARs. However, not everyone is as savvy about this sort of thing, so I thought it would be a good idea to write a blog entry walking through the process.

Script Creation

All the JARs are present inside the osgi/marketplace folder, buried inside .lpkg files. So, as a first step to get at the .jar files, you might create a temporary folder (which we'll call temp) and then extract each .lpkg into it:

install_lpkg() {
  mkdir -p temp
  unzip -uqq "$1" -d temp
  rm -rf temp
}

With each JAR file, the next thing you'd want to do is install it to your local Maven repository, following Apache's guide to installing 3rd party JARs. This leads you to these commands, which assume you have some sense of the artifact ID and artifact version for the JAR.

install_jar() {
  mvn install:install-file -Dpackaging=jar -Dfile=$1 -DgroupId=com.liferay -DartifactId=${ARTIFACT_ID} -Dversion=${VERSION}
}

for jar in temp/*.jar; do
  install_jar "$jar"
done

How would you uncover the artifact ID and artifact version for a JAR? As outlined in our Configuring Dependencies documentation in the Developer Guide, all of the module JARs included with LPKGs use the bundle symbolic name as the artifact ID and the bundle version as the artifact version. Since both are stored in the JAR manifest, once you have a .jar file you can extract the intended artifact ID and artifact version fairly easily:

ARTIFACT_ID=$(unzip -c $1 META-INF/MANIFEST.MF | grep -F Bundle-SymbolicName | sed 's/[\r\n]//g' | cut -d':' -f 2 | sed 's/ //g')
VERSION=$(unzip -c $1 META-INF/MANIFEST.MF | grep -F Bundle-Version | sed 's/[\r\n]//g' | cut -d':' -f 2 | sed 's/ //g')

You could iterate over every .jar and do this, but all of the public .jar files already exist in repository.liferay.com, along with the appropriate source code (which most IDEs will auto-download). Because a .jar taken from an .lpkg bundle has no attached source, it's better to restrict your installation to only those modules that are private.

How do you differentiate between a private module and a public module? You could compare CE releases and EE releases, but there's a slightly easier way. It turns out that when Liferay bundles an artifact, it adds a Liferay-Releng-Public header to the artifact to indicate whether it was intended to be private or public. This means you can check using the Liferay binary itself: without crawling Liferay's public Maven repository, you can figure out which modules are not available in public repositories and limit your installation to those artifacts.

if [ "" == "$(unzip -c $1 META-INF/MANIFEST.MF | grep -F Liferay-Releng-Public | grep -F false)" ]; then return 0 fi Script Completion

Combining all of those elements together leaves you with the following script. Simply run it from the osgi/marketplace folder, and it will extract your .lpkg files and install any non-public .jar files to your local Maven repository.

#!/bin/bash

install_jar() {
  if [ "" == "$(unzip -c $1 META-INF/MANIFEST.MF | grep -F Liferay-Releng-Public | grep -F false)" ]; then
    return 0
  fi

  local ARTIFACT_ID=$(unzip -c $1 META-INF/MANIFEST.MF | grep -F Bundle-SymbolicName | sed 's/[\r\n]//g' | cut -d':' -f 2 | sed 's/ //g')
  local VERSION=$(unzip -c $1 META-INF/MANIFEST.MF | grep -F Bundle-Version | sed 's/[\r\n]//g' | cut -d':' -f 2 | sed 's/ //g')

  mvn install:install-file -Dpackaging=jar -Dfile=$1 -DgroupId=com.liferay -DartifactId=${ARTIFACT_ID} -Dversion=${VERSION}
}

install_lpkg() {
  mkdir -p temp
  unzip -uqq "$1" -d temp

  for jar in temp/*.jar; do
    install_jar "$jar"
  done

  rm -rf temp
}

shopt -s nullglob

for lpkg in *.lpkg; do
  install_lpkg "$lpkg"
done
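For example, assuming you saved the script as install-private-modules.sh (the name is just illustrative) and that $LIFERAY_HOME points at your bundle, a run would look like this:

cd $LIFERAY_HOME/osgi/marketplace
chmod +x install-private-modules.sh
./install-private-modules.sh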

If you need to publish to a remote repository, simply replace mvn install:install-file with mvn deploy:deploy-file, as outlined in Apache's guide to deploying 3rd party JARs to a remote repository, and provide the additional parameters: the repositoryId and the url of the repository you wish to publish to.
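For example, the install line in the script above would become something like the following (the repositoryId and url values are placeholders for your own repository):

mvn deploy:deploy-file -Dpackaging=jar -Dfile=$1 -DgroupId=com.liferay -DartifactId=${ARTIFACT_ID} -Dversion=${VERSION} -DrepositoryId=my-internal-repo -Durl=https://repo.example.com/repositories/thirdparty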

Minhchau Dang 2018-04-10T01:05:23Z
Categories: CMS, ECM

Contact Management on-line training session - Friday, April 13th, 9 am MT

CiviCRM - Mon, 04/09/2018 - 13:39

Cividesk will be offering the Fundamentals of Contact Management, an essential on-line course for new users of CiviCRM, on Friday, April 13th, 9 to 11 am MT (start time: 8 am PT/10 am CT/11 am ET). 

Categories: CRM

CiviCRM 5.0.0 release

CiviCRM - Mon, 04/09/2018 - 08:57
Greetings, CiviCRM community!

The April release of CiviCRM is now ready to download. This is a normal monthly release, but the version numbering has changed to 5.0.0.

RELEASE NOTES: Big thanks to Andrew Hunt from AGH Strategies for putting together the release notes for this version.
Categories: CRM

Getting hit by chargebacks? How to protect your business and bottom line

PrestaShop - Mon, 04/09/2018 - 07:20
Accepting credit cards in your online store is a great way to generate more revenue for your business, but it also brings with it an element of buyer beware, or in this case, seller beware.
Categories: E-commerce

Mobile matters.

PrestaShop - Mon, 04/09/2018 - 05:08
E-commerce Trends  
Categories: E-commerce

Overriding Component Properties

Liferay - Sat, 04/07/2018 - 10:33
Introduction

So once you've been doing some Liferay OSGi development, you'll recognize your component properties stanza, most commonly applied to a typical portlet class:

@Component(
	immediate = true,
	property = {
		"com.liferay.portlet.add-default-resource=true",
		"com.liferay.portlet.display-category=category.hidden",
		"com.liferay.portlet.layout-cacheable=true",
		"com.liferay.portlet.private-request-attributes=false",
		"com.liferay.portlet.private-session-attributes=false",
		"com.liferay.portlet.render-weight=50",
		"com.liferay.portlet.use-default-template=true",
		"javax.portlet.display-name=my-controlpanel Portlet",
		"javax.portlet.expiration-cache=0",
		"javax.portlet.init-param.template-path=/",
		"javax.portlet.init-param.view-template=/view.jsp",
		"javax.portlet.name=" + MyControlPanelPortletKeys.MyControlPanel,
		"javax.portlet.resource-bundle=content.Language",
		"javax.portlet.security-role-ref=power-user,user",
		"javax.portlet.supports.mime-type=text/html"
	},
	service = Portlet.class
)
public class MyControlPanelPortlet extends MVCPortlet {
}

This is the typical thing you get when you use the Blade tools' "panel-app" template.

This is all well and good: you're in development, and you can edit these as needed to add, remove or change values.

But what can you do about the OOTB Liferay components, the ones compiled into classes, packaged into a jar, which is itself packaged into an LPKG file in the osgi/marketplace folder?

Overriding Component Properties

So actually this is quite easy to do. Before I show you how, though, I want to show what is actually going on...

So the "property" stanza or the lesser-used "properties" one (this one is used to identify a file to use for component properties), these are actually managed by the OSGi Configuration Admin service. Because it is managed by CA, we actually get a slew of functionality without even knowing about it.

The @Activate and @Modified annotations that allow you to pass in the properties map? CA is participating in that.

The @Reference annotation target filters referring to property values? CA is participating in that.

Just as CA is at the core of all of the configuration interfaces and of Liferay's ConfigurationProviderUtil for fetching a particular configuration instance, these component properties can be accessed in code in a similar way.
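For instance, a component can receive its merged property map through its lifecycle methods. Here is a minimal sketch (the component class and the greeting property are invented for illustration):

import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Modified;

@Component(
	property = {"greeting=Hello"},
	service = GreetingService.class
)
public class GreetingService {

	@Activate
	@Modified
	protected void activate(Map<String, Object> properties) {
		// CA hands us the merged view: the values declared above plus
		// any osgi/configs overrides for this component.
		_greeting = (String)properties.getOrDefault("greeting", "Hello");
	}

	public String greet(String name) {
		return _greeting + ", " + name + "!";
	}

	private volatile String _greeting;
}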

The other thing CA brings us, the thing we're going to take advantage of here, is that CA can consume override files with custom property additions and updates (sorry, no deletes).

Let's say my sample class is actually com.dnebinger.control.panel.test.internal.portlet.MyControlPanelPortlet. To override the properties, I just have to create an osgi/configs/com.dnebinger.control.panel.test.internal.portlet.MyControlPanelPortlet.cfg file. Note the importance of (a) the location where the file goes, (b) the file name being the full package/class name, and (c) the file having either the .cfg or .config extension and conforming to the appropriate CA format for that type.

The .cfg format is the simpler of the two; it follows a standard property file format. So if I wanted to override the category, exposing this portlet so it can be dropped on a page, I could put the following in my osgi/configs/com.dnebinger.control.panel.test.internal.portlet.MyControlPanelPortlet.cfg file:

com.liferay.portlet.display-category=category.sample

That's all there is to it. CA will apply this override when collecting the component properties, and when Liferay processes the component it will treat this portlet as though it were in the Sample category, allowing you to drop it on a page.

In a similar way you can add new properties, with the caveat that the code must support them. For example, the MyControlPanelPortlet is not instanceable; I could put the following into my .cfg file:

com.liferay.portlet.instanceable=true

I'm adding a new property, one that is not in the original set, but I know the code supports it and it will make the portlet instanceable.

Conclusion

Using this same technique, you can override the properties for any OOTB Liferay component, including portlets, action classes, etc.

Just be sure to put the file into the osgi/configs folder, name the file correctly using the full package/class name, and use the .cfg or .config extension with the correct format.

You can find out more about the .config format here: https://dev.liferay.com/discover/portal/-/knowledge_base/7-0/understanding-system-configuration-files
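For reference, the same category override expressed in the .config format would look roughly like this (note that string values are quoted in that syntax):

com.liferay.portlet.display-category="category.sample"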

David H Nebinger 2018-04-07T15:33:43Z
Categories: CMS, ECM

Liferay and Docker: Upgrade Liferay to 7.0 GA6

Liferay - Sat, 04/07/2018 - 07:40

Liferay Portal 7.0 CE GA6 was announced about two weeks ago, and Liferay containerisers may want to upgrade their Docker containers to the new Liferay version. This is not a hard task to accomplish, but some steps can be less than obvious the first time one faces them. That is the reason behind this little guide on how to migrate from GA5 to GA6 inside a Docker container.

Local environment update

The first step is to migrate the local development environment to the new Liferay version. This phase is the same for both normal and containerised workspaces. In order to update the local environment, it's necessary to:

  • Update the liferay.workspace.bundle.url property inside the gradle.properties file to:
    liferay.workspace.bundle.url=https://cdn.lfrs.sl/releases.liferay.com/portal/7.0.5-ga6/liferay-ce-portal-tomcat-7.0-ga6-20180320170724974.zip
  • Run the bundle/initBundle gradle task
Docker container update

Now that the development workspace has been migrated, it's necessary to update the Liferay Docker container. The liferay.home path in the new container may differ from the path inside the GA5 container. For the sake of convenience, the GA6_LIFERAY_HOME variable will be used to refer to the liferay.home path in the new container, while GA5_LIFERAY_HOME refers to the liferay.home path inside the old one. For the Liferay containers inside my GitHub repo, the two liferay.home paths are the following:

GA5_LIFERAY_HOME=/usr/local/liferay-ce-portal-7.0-ga5
GA6_LIFERAY_HOME=/usr/local/liferay-ce-portal-7.0-ga6

In order to update the Docker container, it's necessary to:

  • Change the Docker image inside the docker-compose.yml file:
    image: glassofwhiskey/liferay-portal:7.0-ce-ga6-dev
  • Update all portal container volumes inside docker-compose.yml so that they point to GA6_LIFERAY_HOME instead of GA5_LIFERAY_HOME:
    volumes:
      - liferay-document-library:GA6_LIFERAY_HOME/data/document_library
      - ${LIFERAY_BUNDLE_DIR}/osgi/configs:GA6_LIFERAY_HOME/osgi/configs
      - ${LIFERAY_BUNDLE_DIR}/portal-ext-properties:GA6_LIFERAY_HOME/portal-ext.properties
      ...
  • Update the liferay.home property inside the portal-ext.properties file to point to GA6_LIFERAY_HOME, and copy the updated file into the bundles folder:
    liferay.home=GA6_LIFERAY_HOME
Database upgrade

The brand new Liferay container is almost ready now, but one last step is still missing. Indeed, launching startDockerEnv will result in an exception thrown during the server startup phase: you need to upgrade your DB first!

This is the not-so-obvious part of a containerised upgrade. Normally, it would be enough to open a shell inside your bundles/tools/portal-tools-db-upgrade-client folder and type the following command:

java -jar com.liferay.portal.tools.db.upgrade.client.jar

But this will not work so well in a Dockerised architecture, with Liferay and the DB running inside containers. In that case, it's necessary to run the aforementioned command from inside the Liferay container. To be able to do that, the docker-compose.yml file must be modified a bit:

  • First of all, the bundles/tools folder must be visible inside the container. The first step is therefore to add a new bind mount to the portal container:
    volumes:
      ...
      - ${LIFERAY_BUNDLE_DIR}/tools:GA6_LIFERAY_HOME/tools
  • Then it's necessary to be able to execute the upgrade client inside the container. Therefore, Liferay must not start automatically when the startDockerEnv task is invoked. What is needed instead is a Liferay container that hangs forever doing nothing, so that the upgrade client can execute its tasks undisturbed. To achieve this, the following line should be added to the docker-compose.yml file (a consolidated sketch of the full service definition follows this list):
    entrypoint: "tail -f /dev/null"

Now it's time to execute the startDockerEnv task, wait for the containers to start, and run the following command to execute the DB upgrade client from inside the Liferay container (where LIFERAY_CONTAINER_NAME is the name of the Liferay Docker container):

docker exec -it LIFERAY_CONTAINER_NAME java -jar GA6_LIFERAY_HOME/tools/portal-tools-db-upgrade-client/com.liferay.portal.tools.db.upgrade.client.jar

This command will start an interactive shell with the DB upgrade client. From this point on, all the information reported here is perfectly valid and the upgrade process can be completed as usual.

Under some conditions, the DB upgrade client may throw a file-permissions-related exception at the end of the configuration phase. In that case, it's necessary to run the previous command as the root user, using this modified version of the command:

docker exec -it --user="root" LIFERAY_CONTAINER_NAME java -jar GA6_LIFERAY_HOME/tools/portal-tools-db-upgrade-client/com.liferay.portal.tools.db.upgrade.client.jar

Conclusions

And here it is. After the upgrade process completes, the Liferay DB will be ready to support the GA6 portal. All that remains is to run the stopDockerEnv task, remove the additional lines from the docker-compose.yml file and restart the whole thing. Et voilà! A fully upgraded GA6 containerised development environment is ready to be explored.

If you face issues during the upgrade process, please don't be afraid to report them below: I (or someone from the Liferay containerisers community) will try to help you.

Happy Liferay upgrading!!!

Iacopo Colonnelli 2018-04-07T12:40:28Z
Categories: CMS, ECM