AdoptOS

Assistance with Open Source adoption

Open Source News

Open Source Initiative Announces New Partnership With Adblock Plus

Open Source Initiative - Tue, 01/16/2018 - 12:03

PALO ALTO, Calif. - Jan. 16, 2018 -- Adblock Plus, the most popular Internet ad blocker today, joins the Open Source Initiative® (OSI) as a corporate sponsor. Since its very first version, Adblock Plus has been an open source project, and it has developed into a successful business with over 100 million users worldwide. As such, the German company behind it, eyeo GmbH, has decided it is time to give back to the open source community.

Founded in 1998, the OSI protects and promotes open source software, development and communities, championing software freedom in society through education, collaboration, and infrastructure. Adblock Plus is an open source project that aims to rid the Internet of annoying and intrusive online advertising. Its free web browser extensions (add-ons) put users in control by letting them block or filter which ads they want to see.

Commenting on the partnership, Patrick Masson, General Manager at the OSI, said, "We're very excited to welcome Adblock Plus to the OSI's growing list of sponsors. Adblock Plus and eyeo demonstrate how open source software can not only support business but actually drive business — two important lessons we here at the OSI have been promoting for nearly 20 years."

"With transparency being of utmost importance to us, Adblock Plus has been an open source project from the very start " said Wladimir Palant, eyeo founder & original developer. "This allowed us to build a loyal community around the project, with volunteer contributions helping the project to grow and thrive. We appreciate the work done by our community and will continue investing efforts into keeping Adblock Plus a truly open project where everybody can contribute"

Till Faida, founder and CEO of eyeo adds: "I am proud that we have built a successful company based on open source software. We are convinced that being open is key to innovation, so for us it is a mission and a business case. Today, eyeo has more than 100 employees all around the world, producing and running open software, wherever possible. With Adblock Plus we want to contribute to a sustainable, fair and open web for creators and consumers. So it is only logical to provide our products as open source."

Adblock Plus joins a broad range of well-known technology and software companies that all started as open source projects and matured into open source businesses. Now they are contributing back to the broader open source community as OSI sponsors and supporters.

About Adblock Plus

Adblock Plus (https://adblockplus.org/) is an open source project that aims to rid the Internet of annoying and intrusive online advertising. Its free web browser extensions (add-ons) put users in control by letting them block or filter which ads they want to see. Users across the world have downloaded Adblock Plus over 1 billion times, and it has remained the most downloaded and the most used extension almost continuously since November 2006. PC Magazine named the extension one of the best free Google Chrome extensions, and it received the About.com readers' choice award for best privacy/security add-on. Adblock Plus is a free browser add-on for Safari, Chrome, Firefox, Internet Explorer, Maxthon and Opera for desktop users, and offers a free browser for mobile users on iOS and Android.

Follow Adblock Plus on Twitter at @AdblockPlus and read our blogs at adblockplus.org/blog/. A media press kit with FAQ, images and company statistics is available at eyeo.com/en/press.

Adblock Plus Media Contact
Laura Dornheim
laura(a)adblockplus.org
+49 172 8903504
@schwarzblond

About The Open Source Initiative

Founded in 1998, the Open Source Initiative (OSI) protects and promotes open source software, development and communities, championing software freedom in society through education, collaboration, and infrastructure, stewarding the Open Source Definition, and preventing abuse of the ideals and ethos inherent to the open source movement. The OSI is a California public benefit corporation, with 501(c)(3) tax-exempt status. For more information about the OSI, see https://opensource.org.

Follow the OSI on Twitter at @opensourceorg, and read our blogs at opensource.org/news.

OSI Media Contact
Italo Vignoli
italo(a)opensource.org

Categories: Open Source

Happy seventeenth birthday Drupal

Drupal - Mon, 01/15/2018 - 17:30

This blog has been re-posted and edited with permission from Dries Buytaert's blog. Please leave your comments on the original post.

Seventeen years ago today, I open-sourced the software behind Drop.org and released Drupal 1.0.0. When Drupal was first founded, Google was in its infancy, the mobile web didn't exist, and JavaScript was a very unpopular word among developers.

Over the course of the past seventeen years, I've witnessed the nature of the web change and countless internet trends come and go. As we celebrate Drupal's birthday, I'm proud to say it's one of the few content management systems that has stayed relevant for this long.

While the course of my career has evolved, Drupal has always remained a constant. It's what inspires me every day, and the impact that Drupal continues to make energizes me. Millions of people around the globe depend on Drupal to deliver their business, mission and purpose. Looking at the Drupal users in the video below gives me goosebumps.

Drupal's success is not only marked by the organizations it supports, but also by our community that makes the project more than just the software. While there were hurdles in 2017, there were plenty of milestones, too:

  • At least 190,000 sites running Drupal 8, up from 105,000 sites in January 2016 (80% year over year growth)
  • 1,597 stable modules for Drupal 8, up from 810 in January 2016 (95% year over year growth)
  • 4,941 DrupalCon attendees in 2017
  • 41 DrupalCamps held in 16 different countries in the world
  • 7,240 individual code contributors, a 28% increase compared to 2016
  • 889 organizations that contributed code, a 26% increase compared to 2016
  • 13+ million visitors to Drupal.org in 2017
  • 76,374 instance hours for running automated tests (the equivalent of almost 9 years of continuous testing in one year)

Since Drupal 1.0.0 was released, our community's ability to challenge the status quo, embrace evolution and remain resilient has never faltered. 2018 will be a big year for Drupal as we continue to tackle important initiatives that not only improve Drupal's ease of use and maintenance, but also propel Drupal into new markets. No matter the challenge, I'm confident that the spirit and passion of our community will continue to grow Drupal for many birthdays to come.

Tonight, we're going to celebrate Drupal's birthday with a warm skillet chocolate chip cookie topped with vanilla ice cream. Drupal loves chocolate! ;-)

Note: The video was created by Acquia, but it is freely available for anyone to use when selling or promoting Drupal.

Categories: CMS

Contributing to CiviCRM

CiviCRM - Mon, 01/15/2018 - 13:51

Last year, we overhauled the CiviCRM contributor program, allowing individuals to submit details about their contributions online and combining the contributor and partner listings. So far this year, we've seen an increase in contributions being logged, as well as a new revision to the site that improves the visibility of contributions. We thought it'd be a good time to recap where the program is at, answer a few common questions, and highlight where we see it going.

Categories: CRM

Text Mining Course with KNIME Analytics Platform, Rome

Knime - Mon, 01/15/2018 - 09:51

Course date: February 7, 2018, at 9:00 a.m.

Location: LUISS ENLABS, via Marsala 29h, 00185 Rome

Duration: 9:00 a.m. to 5:00 p.m.

Objectives: Learn the basic concepts of text processing and its possible applications through data and text mining techniques.

Who it's for: Business analysts, market researchers and statisticians who need to use unstructured text data in their analyses.

Prerequisites: Basic knowledge of the KNIME software and of the operating environment in which it runs. Basic knowledge of statistical tools.

Register Now

Topics Covered
  • Importing text from different sources;
  • Cleaning and semantic enrichment;
  • Preparing the text for analytical tools;
  • Keyword extraction and visualization;
  • Automatic topic extraction;
  • Automatic grouping and classification of texts.
Additional Information

Each participant is asked to bring a laptop. We also recommend installing KNIME Analytics Platform in advance. To install the software correctly, please watch the following videos:

Price
  • Text Mining with KNIME Analytics Platform: €750.00

By also registering for the "Introduction to KNIME Analytics Platform" course, you will receive a 20% discount on the full course price, applied at the time of payment. The total cost for the two courses is therefore €1,250.00.

Categories: BI

Introduction to KNIME Analytics Platform Course, Rome

Knime - Mon, 01/15/2018 - 09:35

Course date: February 6, 2018, at 9:00 a.m.

Location: LUISS ENLABS, via Marsala 29h, 00185 Rome

Duration: 9:00 a.m. to 5:00 p.m.

Objectives: Provide the tools to navigate the KNIME Analytics Platform environment, handle various types of data, produce statistics and charts, and apply data analysis techniques.

Who it's for: Professionals in a variety of roles who are interested in learning how to use KNIME Analytics Platform.

Prerequisites: Be able to understand file structures and access them on your own operating system. Basic knowledge of statistical tools.

Register Now

Topics Covered
  • Introduction to KNIME Analytics Platform;
  • Accessing and transforming external data;
  • Data transformation: joining tables, filtering rows and columns, creating new columns, aggregations and other similar operations;
  • Producing statistics and charts;
  • Applying machine learning techniques;
  • Integrating external tools;
  • Exporting results and moving to a production environment.
Additional Information

Each participant is asked to bring a laptop. We also recommend installing KNIME Analytics Platform in advance. To install the software correctly, please watch the following videos:

Price
  • Introduction to KNIME Analytics Platform: €750.00

By also registering for the "Text Mining with KNIME Analytics Platform" course, you will receive a 20% discount on the full course price, applied at the time of payment. The total cost for the two courses is therefore €1,250.00.

Categories: BI

How to decouple Drupal in 2018

Drupal - Fri, 01/12/2018 - 13:19

This blog has been re-posted and edited with permission from Dries Buytaert's blog. Please leave your comments on the original post.

In this post, I'm providing some guidance on how and when to decouple Drupal.

Almost two years ago, I wrote a blog post called "How should you decouple Drupal?". Many people have found the flowchart in that post useful in their decision-making on how to approach their Drupal architectures. Since then, Drupal, its community, and the surrounding market have evolved, and the original flowchart needs a big update.

Drupal's API-first initiative has introduced new capabilities, and we've seen the advent of the Waterwheel ecosystem and API-first distributions like Reservoir, Headless Lightning, and Contenta. More developers both inside and outside the Drupal community are experimenting with Node.js and adopting fully decoupled architectures.

Let's start with the new flowchart in full:

All the ways to decouple Drupal

The traditional approach to Drupal architecture, also referred to as coupled Drupal, is a monolithic implementation where Drupal maintains control over all front-end and back-end concerns. This is Drupal as we've known it — ideal for traditional websites. If you're a content creator, keeping Drupal in its coupled form is the optimal approach, especially if you want to achieve a fast time to market without as much reliance on front-end developers. But traditional Drupal 8 also remains a great approach for developers who love Drupal 8 and want it to own the entire stack.

A second approach, progressively decoupled Drupal, strikes a balance between editorial needs like layout management and developer desires to use more JavaScript, by interpolating a JavaScript framework into the Drupal front end. Progressive decoupling is in fact a spectrum, ranging from Drupal rendering only the page's shell and populating initial data, to JavaScript controlling only explicitly delineated sections of the page. Progressively decoupled Drupal hasn't taken the world by storm, likely because it's a mixture of both JavaScript and PHP and doesn't take advantage of server-side rendering via Node.js. Nonetheless, it's an attractive approach because its compromises offer features important to both editors and developers.

Last but not least, fully decoupled Drupal has gained more attention in recent years as the growth of JavaScript continues with no signs of slowing down. This involves a complete separation of concerns between the structure of your content and its presentation. In short, it's like treating your web experience as just another application that needs to be served content. Even though it results in a loss of some out-of-the-box CMS functionality such as in-place editing or content preview, it's been popular because of the freedom and control it offers front-end developers.

What do you intend to build?

The most important question to ask is what you are trying to build.

  1. If your plan is to create a single standalone website or web application, decoupling Drupal may or may not be the right choice based on the must-have features your developers and editors are asking for.
  2. If your plan is to create multiple experiences (including web, native mobile, IoT, etc.), you can use Drupal to provide web service APIs that serve content to other experiences, either as (a) a content repository with no public-facing component or (b) a traditional website that is also a content repository at the same time.

Ultimately, your needs will determine the usefulness of decoupled Drupal for your use case. There is no technical reason to decouple if you're building a standalone website that needs editorial capabilities, though some teams still choose to decouple simply because they prefer JavaScript over PHP. Either way, you need to pay close attention to the needs of your editors and ensure you aren't removing crucial features by using a decoupled approach. By the same token, you can't avoid decoupling Drupal if you're using it as a content repository for IoT or native applications. The next part of the flowchart will help you weigh those trade-offs.

Today, Drupal makes it much easier to build applications consuming decoupled Drupal. Even if you're using Drupal as a content repository to serve content to other applications, well-understood specifications like JSON API, GraphQL, OpenAPI, and CouchDB significantly lower its learning curve and open the door to tooling ecosystems provided by the communities who wrote those standards. In addition, there are now API-first distributions optimized to serve as content repositories and SDKs like Waterwheel.js that help developers "speak" Drupal.
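To make this concrete, here is a minimal sketch of a client consuming content from a decoupled Drupal back end over JSON API. It assumes the JSON API module is enabled and that an "article" content type exists; the host name is a placeholder, and any HTTP client in any language would work equally well (Java 11's built-in client is used here purely for illustration):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DrupalJsonApiClient {

	public static void main(String[] args) throws Exception {
		HttpClient client = HttpClient.newHttpClient();

		// JSON API exposes collections at /jsonapi/{entity_type}/{bundle};
		// the host and bundle below are placeholders.
		HttpRequest request = HttpRequest.newBuilder()
			.uri(URI.create("https://example.com/jsonapi/node/article"))
			.header("Accept", "application/vnd.api+json")
			.GET()
			.build();

		HttpResponse<String> response =
			client.send(request, HttpResponse.BodyHandlers.ofString());

		// The body is a standard JSON API document: a "data" array of resource
		// objects, each with "type", "id" and an "attributes" map (title, body, ...).
		System.out.println(response.statusCode());
		System.out.println(response.body());
	}
}

The same endpoint can be consumed from Waterwheel.js, a native mobile app or any other channel, which is what makes the content-repository pattern described above practical.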

Are there things you can't live without?

Perhaps most critical to any decision to decouple Drupal is the must-have feature set desired for both editors and developers. In order to determine whether you should use a decoupled Drupal, it's important to isolate which features are most valuable for your editors and developers. Unfortunately, there are no black-and-white answers here; every project will have to weigh the different pros and cons.

For example, many marketing teams choose a CMS because they want to create landing pages, and a CMS gives them the ability to lay out content on a page, quickly reorganize a page and more. The ability to do all this without the aid of a developer can make or break a CMS in marketers' eyes. Similarly, many digital marketers value the option to edit content in the context of its preview and to do so across various workflow states. These kinds of features typically get lost in a fully decoupled setting where Drupal does not exert control over the front end.

On the other hand, the need for control over the visual presentation of content can hinder developers who want to craft nuanced interactions or build user experiences in a particular way. Moreover, developer teams often want to use the latest and greatest technologies, and JavaScript is no exception. Nowadays, more JavaScript developers are including modern techniques, like server-side rendering and ES6 transpilation, in their toolboxes, and this is something decision-makers should take into account as well.

How you reconcile this tension between developers' needs and editors' requirements will dictate which approach you choose. For teams that have an entirely editorial focus and lack developer resources — or whose needs are focused on the ability to edit, place, and preview content in context — decoupling Drupal will remove all of the critical linkages within Drupal that allow editors to make such visual changes. But for teams with developers itching to have more flexibility and who don't need to cater to editors or marketers, fully decoupled Drupal can be freeing and allow developers to explore new paradigms in the industry — with the caveat that many of those features that editors value are now unavailable.

What will the future hold?

In the future, and in light of the rapid evolution of decoupled Drupal, my hope is that Drupal keeps shrinking the gap between developers and editors. After all, this was the original goal of the CMS in the first place: to help content authors write and assemble their own websites. Drupal's history has always been a balancing act between editorial needs and developers' needs, even as the number of experiences driven by Drupal grows.

I believe the next big hurdle is how to begin enabling marketers to administer all of the other channels appearing now and in the future with as much ease as they manage websites in Drupal today. In an ideal future, a content creator can build a content model once, preview content on every channel, and use familiar tools to edit and place content, regardless of whether the channel in question is mobile, chatbots, digital signs, or even augmented reality.

Today, developers are beginning to use Drupal not just as a content repository for their various applications but also as a means to create custom editorial interfaces. It's my hope that we'll see more experimentation around conceiving new editorial interfaces that help give content creators the control they need over a growing number of channels. At that point, I'm sure we'll need another new flowchart.

Conclusion

Thankfully, Drupal is in the right place at the right time. We've anticipated the new world of decoupled CMS architectures with web services in Drupal 8 and older contributed modules. More recently, API-first distributions, SDKs, and even reference applications in Ember and React are giving developers who have never heard of Drupal the tools to interact with it in unprecedented ways.

Unlike many other content management systems, old and new, Drupal provides a spectrum of architectural possibilities tuned to the diverse needs of different organizations. This flexibility between fully decoupling Drupal, progressively decoupling it, and traditional Drupal — in addition to each solution's proven robustness in the wild — gives teams the ability to make an educated decision about the best approach for them. This optionality sets Drupal apart from new headless content management systems and most SaaS platforms, and it also shows Drupal's maturity as a decoupled CMS over WordPress. In other words, it doesn't matter what the team looks like or what the project's requirements are; Drupal has the answer.

Special thanks to Preston So for contributions to this blog post and to Alex Bronstein, Angie Byron, Gabe Sullice, Samuel Mortenson, Ted Bowman and Wim Leers for their feedback during the writing process.

Categories: CMS

Adding custom Layout type

Liferay - Fri, 01/12/2018 - 03:47

Liferay has different types of pages. The default one is Layout: a standard page that is empty by default and displayed in the navigation menu. Portlets may be added to such a page, and themes and layouts may also be applied to it. This page type is used in most cases.

But there are other page types as well:

  • Link to a Page of This Site - as the name says, this is a link to some page within the site. A page of this type doesn't have its own content; it redirects to another page within the same site when clicked in the navigation menu
  • Link to URL - similar to the previous type, but it may refer to a page of another site within the portal, or even to an external URL
  • Panel - this type of page is used to work with several portlets on a single page. When you set the page type to Panel in the Site Pages menu, you can specify which applications (portlets) will be available on the panel page
  • Embedded - a Liferay page containing an IFrame that displays content from a specified URL. When you select the Embedded page type, you can specify the URL to display.

In most cases these page types are enough for portal development. But sometimes there is a need for a custom layout type, for example a layout that acts as a link to a custom friendly URL, such as a product category.

Fortunately, Liferay provides a way to extend this functionality. See the original Liferay portal.properties:

#
# Set the list of layout types. The display text of each of the layout types
# is set in content/Language.properties and prefixed with "layout.types.".
#
# You can create new layout types and specify custom settings for each
# layout type. End users input dynamic values as designed in the edit page.
# End users see the layout as designed in the view page. The generated
# URL can reference properties set in the edit page. Parentable layouts
# can contain child layouts. You can also specify a comma delimited list of
# configuration actions that will be called for your layout when it is
# updated or deleted.
#
layout.types=portlet,panel,embedded,url,link_to_layout

Let's create a custom "link_to_category" layout type (similar to Liferay's link_to_layout). To do this, override the 'layout.types' property in the portal.properties file of your hook module:

layout.types=portlet,panel,embedded,url,link_to_layout,link_to_category

and also specify the settings for the new layout type:

layout.edit.page[link_to_category]=/portal/layout/edit/link_to_category.jsp
layout.view.page[link_to_category]=/portal/layout/edit/link_to_category.jsp
layout.url.friendliable[link_to_category]=false
layout.parentable[link_to_category]=false
layout.sitemapable[link_to_category]=false

In the hook's Language.properties, set the name and description of the new layout type:

layout.types.link_to_category=Link to Category
layout.types.link_to_category.description=Link to Category

In the specified JSP file (link_to_category.jsp), add the markup needed for editing the layout. In my case it's the following:

<%@ page import="java.util.Set" %>
<%@ page import="com.liferay.service.util.CategoryServiceUtils" %>
<%@ page import="com.liferay.service.util.Category" %>

<%@ include file="/html/portal/layout/edit/init.jsp" %>

<%
// Load the category tree and the currently selected category (if the layout already exists).
Set<Category> categories = CategoryServiceUtils.getCategoryTrees();

long categoryId = (selLayout != null) ? GetterUtil.getLong(selLayout.getTypeSettingsProperty("categoryId")) : 0;

// Expose the scriptlet variables to the EL expressions used below.
pageContext.setAttribute("categories", categories);
pageContext.setAttribute("categoryId", categoryId);
%>

<aui:select label="link-to-category" name="TypeSettingsProperties--categoryId--" showEmptyOption="<%= true %>">
	<c:forEach var="category" items="${categories}">
		<aui:option label="${category.name}" value="${category.categoryId}" selected="${category.categoryId eq categoryId}" />
	</c:forEach>
</aui:select>

After deploying the hook with the changes above, you'll see the new layout type in the Manage Pages section when adding or editing a layout.

After saving such a layout, the custom 'categoryId' field is stored in the layout's typeSettings. It can then be used within a theme to display the layout differently, as in the sketch below.
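As a rough illustration, a helper like the following could read that value back for use elsewhere. The class is hypothetical and the Layout import path varies by Liferay version; getTypeSettingsProperty and GetterUtil are the same calls already used in the JSP above:

import com.liferay.portal.kernel.util.GetterUtil;
import com.liferay.portal.model.Layout;

/**
 * Hypothetical helper: resolves the category a "link_to_category" layout
 * points to, using the "categoryId" value stored in its type settings.
 */
public class LinkToCategoryUtil {

	public static long getLinkedCategoryId(Layout layout) {
		// "categoryId" is the key written by the TypeSettingsProperties--categoryId-- field.
		return GetterUtil.getLong(layout.getTypeSettingsProperty("categoryId"));
	}
}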

Note 1: the field's "name" attribute should start with "TypeSettingsProperties--" and end with "--" in order to be saved into the layout's TypeSettings. See com.liferay.portlet.layoutsadmin.action.EditLayoutsAction#updateLayout:

UnicodeProperties typeSettingsProperties = PropertiesParamUtil.getProperties(
	actionRequest, "TypeSettingsProperties--");

Note 2: field names should be namespaced. Either use AUI components, as in the example above (they generate the namespace automatically), or add the namespace manually to your component's name, like:

<select name="_156_TypeSettingsProperties--categoryId--">
	<%-- ... --%>
</select>

where 156 is the GROUP_PAGES portlet ID; see com.liferay.portal.util.PortletKeys:

public static final String GROUP_PAGES = "156";

But it's better not to hard-code it.

Hope this will be helpful for you.

 

Regards,

Vitaliy

 

P.S. Read my Liferay book at https://www.aimprosoft.com/aimprosoft-products/liferay-book/

Vitaliy Koshelenko 2018-01-12T08:47:46Z
Categories: CMS, ECM

What You Need to Know About GDPR

Liferay - Thu, 01/11/2018 - 20:46

Data is the lifeblood that powers modern businesses and today's digital transformation. More data allows smarter analytics, which drives better products, which draws more users, which ultimately creates even more data. This concept is known as data network effects: the more user data a company has, the smarter its products and services become.

As such, businesses now face a powerful incentive to maximize data collection. In an ideal world, this effect would be a win-win scenario for both companies and individuals. However, recent history shows that the relentless pursuit of data without regard for users’ right to privacy can result in everything from data breaches to hidden software trackers.

In an effort to safeguard user privacy, the General Data Protection Regulation (GDPR) will go into effect on May 25, 2018, and will be directly binding law for all European Member States.

This legislation is meant to raise the level of personal data protection for European residents and to consolidate this legislation among EU member states. Consequently, companies will likely need to modernize their approach to data protection to comply with the new rules. The regulation is based on the premise that personal data ultimately belongs to the person and gives individuals greater control over how companies use their data, including legal rights to access and delete their personal data and requiring explicit consent to process their data if not already permitted by law.

However, most companies across all industries are not ready for the changes the GDPR will bring to the way business is done. Among UK corporations surveyed by PWC, only 8% have finished preparations, while 34% have just begun preparations and 5% have not started at all. It is vital that every company understands the coming impact of GDPR so they can properly prepare before the regulation goes into effect.

The following crucial information on GDPR will help you better understand its guiding principles, the cost of violation, important rules for every business to follow and how Liferay DXP complies with these new regulations. With the following knowledge, your company can take action and be prepared for the start of GDPR.

The Principle Behind GDPR

The 99 articles and 173 recitals of the GDPR are designed to curb potential personal data abuses. Recital 1 encapsulates the overarching reason behind all of the regulations:

“The protection of natural persons in relation to the processing of personal data is a fundamental right.”

While the protections that come from GDPR may require businesses to restructure their practices, this fundamental principle should be adopted by all businesses due to its focus on the rights of all people. Though it’s tempting to approach compliance with a simple checklist mentality, the question is not so black and white. The regulation is intentionally broad and its specific application will depend on the scale and sensitivity of the personal data each business processes. Businesses will need to take a risk-based approach in evaluating how to comply with the regulations.

No amount of money or effort can ever fully guarantee the safety of a company’s data, but every company should determine the appropriate investment needed to adequately protect its users’ data.

The Cost of GDPR

In the event of a company violating the rules of GDPR, the responsible supervisory authority in the EU Member State can fine data controllers up to 4% of the controller’s global annual revenue of the previous fiscal year or EUR 20 million, whichever is higher. The company can also be required to pay compensation for damages to individuals. Violations will be made public, which could incur even greater costs resulting from a loss of customer trust.

In addition, the PWC study found that among companies that have finished GDPR preparations, more than 88% spent over $1 million on their effort, and 40% spent over $10 million, according to their own estimations. The regulations that come with this new legislation mean that all aspects of a company must work together to prepare and ensure all their processes are compliant, leading to these costs.

The Practice of GDPR

The GDPR outlines regulations for a broad range of practices dealing with personal data, including special rules for processing children’s data, transferring data to other countries, automated decision-making, dealing with data breaches and more. Every company should assess how the GDPR applies to its particular use case. Below are some examples of broader rules that apply to most, if not all, businesses.

  • Lawfulness, fairness, and transparency (article 5(1)(a)) requires businesses to be forthright and transparent about what personal data is being collected and why. Gone are the days of hiding behind pages of legalese in end-user license agreements to acquire consent to collect data. Businesses must make it abundantly clear exactly how and why they process a user’s personal data.
  • Data minimization (article 5(1)(c)) is a principle that contrasts with some modern businesses’ adoption of a “data maximization” mindset. Businesses sometimes collect personal data without a clear purpose for how the data will be used. In this era of big data, AI and machine learning, the driving philosophy is to collect everything now, in case it proves useful later. Under the GDPR, businesses must only collect the minimum amount of data necessary to fulfill its intended purpose.
  • Explicit consent (article 6(1)(a)) from the individual is legally required to process non-essential personal data. This will particularly affect marketers that want to collect information to target users with marketing material. In the words of the regulation, consent must be “freely given, specific, informed and unambiguous.” This means practices like selling email lists to third parties without user consent and pre-ticked checkboxes for email newsletters are no longer permitted.
  • The right to erasure (article 17) empowers individuals with the right to request businesses erase all personal data from their systems, given the business is not legally required to keep the data (like bank records). Also known as the “right to be forgotten,” this requirement can be fulfilled through deletion or anonymization, but the bar for proper anonymization is high.
  • The right to data portability (article 20) similarly gives individuals the right to request businesses export all personal data from their systems. This prevents vendor lock-in where individuals are unable to choose a competing service due to the magnitude and complexity of personal data with a particular business.

These are just a few regulations outlined in the GDPR. Businesses should not underestimate the scope of the GDPR and should conduct thorough assessments of both the regulation and its impact on their existing operations.

What Is Liferay DXP Doing for GDPR?

Compliance will look different for every business. A hospital’s patient records must be handled differently from employee intranet profiles. Liferay is committed to delivering flexible products that can be customized to support your company’s strategy for protecting your end users’ privacy.

Liferay Digital Experience Platform supports robust data protection and security capabilities to accelerate your journey toward GDPR compliance. This includes out-of-the-box user management features, powerful search for discovering data, flexible taxonomy for classifying data, a granular permissioning system and a highly customizable framework. Future capabilities will include the ability to directly manage data portability and data erasure/anonymization needs for users within Liferay systems.

Prepare for GDPR with Liferay

Learn how Liferay projects can comply with personal data protection requirements to work within GDPR regulations while still providing needed services.

Read “Data Protection for Liferay Services and Software”   Dennis Ju 2018-01-12T01:46:42Z
Categories: CMS, ECM

Adding custom classes to Theme Velocity context

Liferay - Thu, 01/11/2018 - 08:56

To make a custom class available inside a theme Velocity template, you need to put its instance into the com.liferay.portal.kernel.util.WebKeys#VM_VARIABLES request attribute. It will then be added to the Velocity context automatically (in the com.liferay.portal.velocity.VelocityTemplateContextHelper#prepare method), see:

// Insert custom vm variables

Map<String, Object> vmVariables =
	(Map<String, Object>)request.getAttribute(WebKeys.VM_VARIABLES);

if (vmVariables != null) {
	for (Map.Entry<String, Object> entry : vmVariables.entrySet()) {
		String key = entry.getKey();
		Object value = entry.getValue();

		if (Validator.isNotNull(key)) {
			template.put(key, value);
		}
	}
}

To achieve this, you can use a custom service pre action:

servlet.service.events.pre=com.comany_name.portal.events.VMPopulateServicePreAction

Inside this action, put the required objects into the WebKeys#VM_VARIABLES request attribute map:

public class VMPopulateServicePreAction extends Action {

	@Override
	public void run(HttpServletRequest request, HttpServletResponse response) {
		// Reuse the existing map if another pre action has already created it.
		Map<String, Object> vmVariablesMap =
			(Map<String, Object>)request.getAttribute(WebKeys.VM_VARIABLES);

		if (vmVariablesMap == null) {
			vmVariablesMap = new HashMap<String, Object>();
		}

		// Every entry becomes a Velocity variable, e.g. $categoryService.
		vmVariablesMap.put("categoryService", new CategoryServiceImpl());

		request.setAttribute(WebKeys.VM_VARIABLES, vmVariablesMap);
	}
}

And now you can use those objects in your theme, like:

#set ($categoryNav = $categoryService.getCategoryNav($categoryId))
## ...
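For completeness, here is a minimal sketch of what the CategoryServiceImpl exposed above could look like. The class body and its return value are hypothetical; the only requirement is that the methods called from the template (here getCategoryNav) are public:

/**
 * Hypothetical service exposed to Velocity as $categoryService.
 * Any public method can be called directly from the template.
 */
public class CategoryServiceImpl {

	public String getCategoryNav(long categoryId) {
		// Real logic would look up the category and build the navigation here;
		// this stub just returns placeholder markup.
		return "<ul class=\"category-nav\"><li>Category " + categoryId + "</li></ul>";
	}
}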

Hope this will help.

Vitaliy

Vitaliy Koshelenko 2018-01-11T13:56:19Z
Categories: CMS, ECM

What is Marketing Automation, and why, for Turing's sake, should I use it?

PrestaShop - Thu, 01/11/2018 - 08:52
The Internet has almost completely penetrated our daily lives. Just a dozen years ago it was still a novelty that we approached very carefully.
Categories: E-commerce

CiviHR - looking back at 2017 and forward to 2018!

CiviCRM - Thu, 01/11/2018 - 07:18
Howdy folks,   It's been a (long) long while since we last blogged about CiviHR here on the CiviCRM news feed, but 2017 has been a transformative year for the project and I'm really excited to share an update on where CiviHR has got to and where we are heading in 2018.   What is CiviHR?  
Categories: CRM

Seven Changes in Digital Business Coming in 2018

Liferay - Tue, 01/09/2018 - 17:49

Businesses are constantly evolving in how they operate, the ways in which they reach target audiences and how they compete in their industries. While this is an ongoing process that is reshaped by changes in technology and the industry landscape, the start of a new year is an appropriate time to predict and take action on the trends and changes that are likely to come over the course of the next 12 months.

The following trends in digital business operations are likely to influence a multitude of industries and their businesses to varying degrees. And while these changes may require restructuring in both short- and long-term business plans, it is important to be aware and plan accordingly so that a company is not caught off guard by shifts in both modern business operations and customer expectations.

1. Contextual Targeting Strategies

The protections created by General Data Protection Regulation (GDPR) mean that businesses will need to leverage customer data carefully and in compliance with these rules regarding how companies utilize customer data in marketing. In the place of audience targeting that can be hampered by GDPR, Forbes shows that contextual targeting strategies can help businesses market to potential customers based on highly detailed contextual information, which conforms to data use standards while still creating effective personalized marketing.

2. Enhancing Security in Payments

In 2017, major data breaches and identity theft highlighted the vulnerability of many organizations and the effects of having sensitive customer information compromised. The Star highlights that businesses will be working to counteract such threats with many different modern solutions. Everything from the use of blockchain to biometrics will be leveraged to an even greater degree in an effort to enhance security measures as digital payments increase and customers use an even greater variety of payment methods.

3. A Surge in Wide Area Networks

Recently released software-defined wide area network (WAN) technology will likely see a major increase in adoption by businesses during 2018, as discussed by Dimension Data. By establishing a communications network that can span a large geographic area but is still tightly controlled by its creators, a WAN can help businesses connect offices and better handle data, system architecture, traffic flow and more. This WAN adoption reflects how modern businesses are attempting to create large networks that balance both widespread data accessibility and needed security.

4. The Increasing Importance of Transparency

Marketing Dive points out that consumers are calling for transparency from businesses regarding their practices, partnerships and policies more than ever. This increasing awareness from audiences means that companies will need to not only be able to show audiences how they operate, but also still market effectively and protect sensitive company information. Companies who are looking to appease the demands of customers must create a strategy for effective transparency that does not compromise business processes.

5. Widespread Use of Robotic Assistants

The rise of robotic software such as chatbots and automation tools has been well publicized and heavily touted by businesses for several years now. But expect 2018 to be the year when these robotic assistants feel like a commonly expected aspect of doing business for most major companies. This acceptance from both employees and customers will cause audiences to expect more personalized experiences across industries. As discussed by CMS Wire, these expectations will create a gap between businesses already providing such experiences and those who do not.

6. The Idea of “Millisecond Marketing”

Reaching customers and responding to their interests is becoming a continually faster process. Thanks to the use of A.I. and various modern technologies, businesses are able to compare customer history with numerous potential advertisements in mere seconds to make the most of marketing opportunities. Expect 2018 to see companies work to achieve “millisecond marketing,” according to AdWeek. This uses complex algorithms to market on an individual basis faster than what is possible in the hands of humans. While this may not truly be achievable by most companies, it is an idea that many will aspire to achieve.

7. Measuring by Cost Per Experiment

Understanding the successes and failures of various business ventures requires companies to find the true return on investment (ROI) of their efforts. With so many different channels and metrics being used by companies at the same time, Forbes predicts that businesses may begin to roll cost per impression (CPM), per click (CPC), per lead (CPL), per pixel (CPP) and more together into a cost per experiment (CPE) metric. This can create a more detailed and measurable view of the wide variety of efforts made by a business in 2018.

Prepare Your Business for Digital Transformation

As digital business continues to evolve, companies across all industries must equip themselves with the technology needed to enhance their operations. Learn how this is possible through the use of digital experience platforms.

Read “Digital Experience Platforms: Designed for Digital Transformation.”   Matthew Draper 2018-01-09T22:49:51Z
Categories: CMS, ECM

Drupal 8 Content Migration: A Guide For Marketers

Drupal - Tue, 01/09/2018 - 13:28

The following blog was written by Drupal Association Premium Supporting Partner, Phase2.

If you’re a marketer considering a move from Drupal 7 to Drupal 8, it’s important to understand the implications of content migration. You’ve worked hard to create a stable of content that speaks to your audience and achieves business goals, and it’s crucial that the migration of all this content does not disrupt your site’s user experience or alienate your visitors.

Content migrations are, in all honesty, fickle, challenging, and labor-intensive. The code that's produced for a migration is used once and discarded; the documentation to support it is generally never seen again once the migration is done. So what's the value in doing it at all?

YOUR DATA IS IMPORTANT (ESPECIALLY FOR SEO!) 

No matter what platform you’re working to migrate, your data is important. You’ve invested lots of time, money, and effort into producing content that speaks to your organization’s business needs.

Migrating your content smoothly and efficiently is crucial for your site's SEO ranking. If you fail to migrate highly trafficked content or to ensure that existing links direct readers to your content's new home, you will see visitor numbers plummet. Once you fall behind in SEO, it's difficult to climb back up to a top spot, so taking content migration seriously from the get-go is vital for your business' visibility.

Also, if you work in healthcare or government, some or all of your content may be legally mandated to be both publicly available and letter-for-letter accurate. You may also have to go through lengthy (read: expensive) legal reviews for every word of content on your sites to ensure compliance with an assortment of legal standards – HIPAA, Section 508 and WCAG accessibility, copyright and patent review, and more.

Some industries also mandate access to content and services for people with Limited English Proficiency, which usually involves an additional level of editorial content review (See https://www.lep.gov/ for resources).  

At media organizations, it’s pretty simple – their content is their business!

In short, your content is a business investment – one that should be leveraged.

SO WHERE DO I START WITH A DRUPAL 8 MIGRATION?

Like with anything, you start at the beginning. In this case that’s choosing the right digital technology partner to help you with your migration. Here’s a handy guide to help you choose the right vendor and start your relationship off on the right foot.

Once you choose your digital partner, content migration should start at the very beginning of the engagement. Content migration is one of the building blocks of a good platform transition. It's not something that can be left for later – trust us on this one. It's complicated, takes a lot of developer hours, and typically affects both your content strategy and your design.

Done properly, the planning stages begin in the discovery phase of the project with your technology vendor, and work on migration usually continues well into the development phase, with an additional last-sprint push to get all the latest content moved over.

While there are lots of factors to consider, they boil down to two questions: What content are we migrating, and how are we doing it?

WHICH CONTENT TO MIGRATE

You may want to transition all of your content, but this is an area that does bear some discussion. We usually recommend a thorough content audit before embarking on any migration adventure. You can learn more about website content audits here. Since most migration happens at a code & database level, it’s possible to filter by virtually any facet of the content you like. The most common in our experience are date of creation, type of content, and categorization.

While it might be tempting to cut off your site’s content to the most recent few articles, Chris Anderson’s 2004 Wired article, “The Long Tail” (https://www.wired.com/2004/10/tail/) observes that a number of business models make good use of old, infrequently used content. The value of the Long Tail to your business is most certainly something that’s worth considering.

Obviously, the type of content to be migrated is pretty important as well. Most content management systems differentiate between different ‘content types’, each with their own uses and value. A thorough analysis of the content model, and of the uses to which each of these types has been and will be put, is invaluable here. There are actually two reasons for that. First, the analysis can be used to determine what content will be migrated, and how. Later, this analysis serves as the basis for creating those ‘content types’ in the destination site.

A typical analysis takes place in a spreadsheet (yay, spreadsheets!). Our planning sheet has multiple tabs but the critical one in the early stages is Content Types.

Here you see some key fields: Count, Migration, and Field Mapping Status.

Count is the number of items of each content type. This is often used to determine whether an automated content migration is more trouble than it's worth, as opposed to a simple cut & paste job. As a very general guideline, if there are more than 50 items of content in a content type, then that content should probably be migrated with automation. Of course, the number of fields in a content type can sway that as well. Once this determination is made, that info is stored in the Migration field.

The Field Mapping Status column is for developers' use and reflects the current effort to create the new content types, with all their fields. It's a summary of the Content Type Specific tabs in the spreadsheet. More detail on this is below.

Ultimately, the question of what content to migrate is a business question that should be answered in close consultation with your stakeholders.  Like all such conversations, this will be most productive if your decisions are made based on hard data.

HOW DO WE DO IT?

This is, of course, an enormous question. Once you’ve decided what content you are going to migrate, you begin by taking stock of the content types you are dealing with. That’s where the next tabs in the spreadsheet come in.

The first one you should tackle is the Global Field Mappings. Most content management systems define a set of default fields that are attached to all content types. In Drupal, for example, this includes title, created, updated, status, and body. Rather than waste effort documenting these on every content type, document them once and, through the magic of spreadsheet functions, print them out on the Content Type tabs.

Generally, you want to note Name, Machine Name, Field Type, and any additional Requirements or Notes on implementation on these spreadsheets.

It’s worth noting here that there are decisions to be made about what fields to migrate, just as you made decisions about what content types. Some data will simply be irrelevant or redundant in the new system, and may safely be ignored.

In addition to content types, you also want to document any supporting data – most likely users and any categorization or taxonomy. For a smooth migration, you usually want to actually start the development with them.

The last step we’ll cover in this post is content type creation. Having analyzed the structure of the data in the old system, it’s time to begin to recreate that structure in the new platform. For Drupal, this means creating new content type bundles, and making choices about the field types. New platforms, or new versions of platforms, often bring changes to field types, and some content will have to be adapted into new containers along the way. We’ll cover all that in a later post.

Now, many systems have the ability to migrate content types, in addition to content. Personally, I recommend against using this capability. Unless your content model is extremely simple, the changes to a content type’s fields are usually pretty significant. You’re better off putting in some labor up front than trying to clean up a computer’s mess later.

In our next post, we’ll address the foundations of Drupal content migrations – Migration Groups, and Taxonomy and User Migrations. Stay tuned!

Written by Joshua Turton

https://www.phase2technology.com/blog/drupal-8-content-migration-guide-marketers

Categories: CMS

liferay-7 Themelet

Liferay - Tue, 01/09/2018 - 04:44

Themelets are small, extendable and reusable pieces of theme code that help avoid duplication: code that can be reused across themes is written once in a themelet, which can then be extended by any theme. Unlike a theme, a themelet includes only the files that contain changes.

 

Here I have lr-7-theme, which looks like the screenshot below.

I will create a themelet using the Liferay Theme Generator to modify the look and feel of this theme.

In a command prompt, navigate to your workspace (or wherever you want the generated themelet to be) and enter: yo liferay-theme:themelet

 

The theme generator will ask for the themelet name, themelet ID and version; enter the values accordingly (refer to the attachment below).

The generated themelet contains a package.json file with configuration information and a src/css directory that contains a _custom.scss file. Just like a theme, all the updated files go into the src directory.

 

The themelet needs to be installed globally so that it can be recognized by the generator.

Navigate to the newly created themelet directory and run the command npm link

This creates a globally-installed symbolic link for the themelet in your npm packages directory.

 

Add the required CSS to lr-7-themelet\src\css\_custom.scss and save the file.

 

Navigate to the theme directory that should extend this themelet and execute the command:

gulp extend

 

Follow the prompts

 

 

 

Select the themelet you want to extend

lr-7-theme now extends lr-7-themelet.

Deploy the theme using the command gulp deploy

Below is the changed look and feel of the page.

gulnaaz Shaik 2018-01-09T09:44:43Z
Categories: CMS, ECM

On-line Training: Back to the Basics

CiviCRM - Mon, 01/08/2018 - 12:29

Start out the new year right by learning best practices in CiviCRM!  I am offering all of our "Fundamentals of CiviCRM"  on-line classes during the month of January. We'll get started on Thursday, January 11th at 12 pm MT with the Fundamentals of Contact Management. The remaining weeks will focus on Membership, Events and Contributions, with a 2-hour session on each module. It's a great introduction for new users or an opportunity to refresh your knowledge if you are currently using CiviCRM.

Categories: CRM

7 Mobile App Best Practices for the Workplace

Liferay - Mon, 01/08/2018 - 07:55

Today, employees in every industry and sector use their mobile devices for work more than ever before. At the same time, they expect their organizations to provide useful mobile applications that support and simplify their daily work. The set of mobile apps available to your team can take many forms, from communication tools to information portals and beyond. Ideally, these applications are designed to give employees what they need to carry out their daily activities more quickly and easily. The following seven examples will help you understand how the mobile applications available to your team can empower them and improve both their productivity and the quality of their work.

1. LGT SmartBanking

The private banking and asset management group LGT decided to provide its staff with a set of mobile applications giving them access to both company services and personalized information, which led to the creation of LGT SmartBanking. As a flexible mobile application, SmartBanking was able to integrate with legacy systems while allowing for long-term development and growth.

Why it stands out: Through this application, employees can carry out organizational tasks, view customer data, connect to critical management systems and perform their daily business tasks, all from their mobile device. In addition, a public version of SmartBanking is available to customers, allowing them to perform any necessary transaction on their smartphone and enjoy better communication with LGT representatives.

2. Telx's Security App

To better equip its data center managers, Telx, a data center and interconnection company, developed a mobile application that lets employees add contacts to a secure database that immediately synchronizes with on-site security teams, allowing contacts to be added at any time without compromising the security of company information.

Why it stands out: Through the app, data center managers, wherever they are, can quickly connect with on-site security teams, enabling fast communication that fits daily needs and safeguards company security. In addition, the application can be quickly redesigned and reorganized to suit users who are on the move and need fast, easy tools.

3. CoachWeb, by Coach Inc.

With more than 1,000 stores around the world, Coach developed CoachWeb, an employee intranet that improved communication between headquarters and stores, as well as between teams. Because it is accessible from smartphones, Coach employees can use CoachWeb to receive the latest news and useful information, and to give feedback at any time.

Why it stands out: Having more than 1,000 stores around the world, each with dozens of employees with their own daily needs, means that the information made accessible through CoachWeb lets workers communicate quickly, no matter what task they are performing. As a result, Coach employees are better equipped while working in stores, something a typical traditional portal could not have achieved.

4. Tap my back.

Los empleados agradecen ser reconocidos y apoyados en su entorno de trabajo, pero, en ocasiones, puede resultar difícil proveer ese reconocimiento en los entornos de trabajo actuales. Tap my back está diseñada para ser una herramienta de motivación y reconocimiento, que permite a los usuarios dar su reconocimiento a compañeros de trabajo y alentar el comportamiento positivo en el entorno laboral de una manera simple, para que sea visible a toda la empresa. Mientras que esta herramienta también provee información importante al equipo, la aplicación fue creada para reforzar la motivación y dar feedback con la finalidad de mejorar la cultura de trabajo.

Why it stands out: Employee motivation plays an important role in company culture and in employee retention rates. Tap My Back takes advantage of the ease of use and speed of modern apps to encourage employees to support one another and improve day-to-day morale in the office.

5. Family Dollar's Store Evaluation App

The constantly expanding retail chain Family Dollar equipped its employees with a mobile store evaluation app. The app provides area managers with quality checklists for each store, covering layout, merchandising organization, pricing, employee behavior, cleanliness and more, in order to make sure every location is delivering a good customer experience.

Why it stands out: The evaluation app completely replaces the old paper-based process, saving significant hours of work and data entry each week across more than 600 employees and resulting in a faster return on investment.

6. Apple's Bug Reporting App

Apple employees have access to a set of exclusive, internal-only apps that let them send valuable ideas and useful feedback to the organization. The bug reporting app allows developers to file detailed reports for any bug they find, including test cases, screenshots and videos.

Why it stands out: The bug reporting app draws on developers' expertise to find and fix potential errors quickly. The result is a catalog of bug reports that helps the company find and address potential failures before they become serious problems.

7. Red Robin's Training App

In an effort to increase employee retention rates, Red Robin created a training app that encouraged employees to engage more with the company. This was achieved through games and simulations, such as a scavenger hunt that combined training and play, so employees learned while being entertained, with the goal of achieving higher worker satisfaction.

Why it stands out: Engaged employees have been shown not only to enjoy their work more but also to be more productive. Red Robin's app helped the company achieve this and also led to the creation of several customer-facing apps for the restaurant.

 

What makes mobile apps valuable in the workplace

While each of these seven mobile apps has its own specific goals for supporting employees' daily work, they all share the common purpose of empowering employees. From those that improve communications to those that deliver the right information in real time, each one takes advantage of the accessibility and ease of use of today's mobile devices to meet employees' daily needs effectively. Through good planning and attention to the specific demands of their teams, each of these companies has delivered useful mobile apps that improve and simplify their workers' everyday tasks.

Create your own set of valuable workplace mobile apps

If you are considering creating a set of mobile apps for your team, it is important to assess your employees' needs and how an app can help meet them.

Learn how the Liferay Mobile Platform can help you   Marta Dueñas González 2018-01-08T12:55:16Z
Categories: CMS, ECM

How to Estimate the Long-Term Costs of Mobile App Development

Liferay - Fri, 01/05/2018 - 15:14

Mobile app development goes beyond the initial cost of launching a product. Successful development requires long-term strategies and plans for how the app will grow and be used in the future.

To give your mobile app the best possible chance of success, it is important to estimate its costs in detail. This includes not only developing and launching a well-built app, but also the long-term cost of maintaining and updating it to keep users satisfied, prevent it from falling into disuse and meet the demands that led you to create the app in the first place.

While initial launch costs will play a large role in how you plan and shape your app, it is the long-term costs that will determine how the app impacts your budget and goals year after year. Effective applications, when used properly, demonstrate their value naturally and in a variety of situations. To be effective, take the following long-term cost factors into account so that development succeeds and is planned for the future.

Initial Structure

Your app's structure will not only determine the initial development cost but will also heavily influence the long-term cost. The operating system you choose and features such as login, social media integration and profile creation will all influence the cost of maintaining the functionality that makes up the app. In addition, the complexity of your app's design will influence how involved updates need to be in order to be effective.

According to a report by the Standish Group, approximately 45% of typical web and mobile application features are never used, with an additional 19% rarely used. These unnecessary features mean that a large part of what makes up your app is costing money with little or no return on investment. Removing unused features, which can be identified through accurate analytics, from the start of your app's development will not only lower the initial development cost but also reduce the cost of future maintenance and updates.

Update Frequency

How often do you plan to update your mobile app, and will those updates be small adjustments or major overhauls? Although regular updates are necessary to ensure the app keeps working correctly and grows to meet constant changes in its user base, these changes will affect long-term costs. Small, frequent updates can deliver minimal but necessary adjustments, while major updates deliver large overhauls and changes to core parts of the app. In either case, a company will need to determine the right balance between providing a consistent app and staying within its annual budget.

App Maintenance

What does app maintenance involve? It is an ongoing process that includes managing and fixing problems, backing up data, implementing small but necessary improvements, preventing future bugs and more. All of these aspects of maintenance keep apps working as they should, providing users with the experience they expect. According to research, typical annual app maintenance costs run between 15% and 20% of the original development cost (for example, an app that cost $100,000 to develop would typically cost $15,000 to $20,000 per year to maintain), which helps establish an overall cost outlook for the future of your app. This projected percentage also means you should make decisions based on your app's initial development cost.

Hybrid or Native App

Will you build a hybrid or a native app? These two kinds of app have distinct development and maintenance characteristics, which will affect your long-term mobile strategy. In particular, hybrid app updates allow a single update to be applied across all versions of the app, while native apps require updates to be applied per operating system, which means additional work and increased cost.

Costs Beyond Development

Although development costs are the main focus here, related expenses should also be taken into account. These include marketing to attract new and returning users to your app, App Store and Google Play fees, the cost of running servers and back-end support, accounting and legal fees, and any extra development team costs beyond the direct costs of building and maintaining the app.

Know Your App's Costs in Detail

There are countless tools available online that will help you combine these estimates into an outline of your app's cost. However, many of them calculate only the initial development cost, not the long-term costs. Take these factors into account and remember that they can change over time, which will require you to rethink your app's cost and what you can do to respond to those changes. Long-term success can come from accurate estimates, a willingness to adjust your mobile app to meet your audience's needs and a budget that will help you achieve that success.

What makes an app effective in the long term?

There is always more to learn when it comes to building effective mobile apps that meet your unique needs.

Learn how Liferay's mobile platform can strengthen your business   Isabella Rocha 2018-01-05T20:14:34Z
Categories: CMS, ECM

Show live or logged in users

Liferay - Fri, 01/05/2018 - 07:29

This blog post shows how to create an interface that displays the number of live (logged-in) users along with some other details.

First, create a portlet that will provide the interface, and then create a hook within that portlet. The link below explains how to create a hook inside a portlet.

http://www.learnandpost.com/2012/05/create-hook-inside-portlet-liferay.html

view.jsp

Add the following code to the portlet's view.jsp:

<%@ page import="com.liferay.portal.service.UserLocalServiceUtil" %>
<%@ page import="com.liferay.portal.model.User" %>
<%@ page import="java.util.Iterator" %>
<%@ page import="com.services.loggedinuser.LoggedInUsers" %>
<%@ page import="java.util.Set" %>
<%@ taglib uri="http://java.sun.com/portlet_2_0" prefix="portlet" %>

<portlet:defineObjects />

<style>
table {
    font-family: arial, sans-serif;
    border-collapse: collapse;
    width: 100%;
}

td, th {
    border: 1px solid #dddddd;
    text-align: left;
    padding: 8px;
}

th {
    background-color: #e1e1e1;
}

tr:nth-child(even) {
    background-color: #eee;
}
</style>

<div style="border:1px solid #ccc;padding:5pt;">
<b>Total number of Live Users: </b>

<%
Set<Long> users_set = LoggedInUsers.loggedInUsers();
%>

<b><%= users_set.size() %></b><br/>

<table>
<thead>
<tr><th>User Id</th><th>Screen Name</th><th>Full Name</th><th>IP Address</th><th>Login Date</th></tr>
</thead>
<tbody>

<%
Iterator<Long> it = users_set.iterator();

while (it.hasNext()) {
    long user_id = it.next();

    User user = UserLocalServiceUtil.getUser(user_id);

    String fullName = user.getFullName();
    String ipAddress = user.getLastLoginIP();
    String screenname = user.getScreenName();
    String loginDate = user.getLoginDate().toString();
%>

<tr>
<td><%= user_id %></td>
<td><%= screenname %></td>
<td><%= fullName %></td>
<td><%= ipAddress %></td>
<td><%= loginDate %></td>
</tr>

<%
}
%>

</tbody>
</table>
</div>

Now it's time to work on the hook we created, which keeps track of the live users.

liferay-hook.xml

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hook PUBLIC "-//Liferay//DTD Hook 6.2.0//EN" "http://www.liferay.com/dtd/liferay-hook_6_2_0.dtd">

<hook>
    <portal-properties>portal.properties</portal-properties>
</hook>

portal.properties

login.events.post=com.services.loggedinuser.LoginPostAction
servlet.session.destroy.events=com.services.loggedinuser.SessionDestroyAction

Create the LoginPostAction and SessionDestroyAction classes.

LoginPostAction

package com.services.loggedinuser;

import com.liferay.portal.kernel.events.Action;
import com.liferay.portal.util.PortalUtil;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LoginPostAction extends Action {

    // Runs after a successful login and records the user as live.
    @Override
    public void run(HttpServletRequest request, HttpServletResponse response) {
        long userId = PortalUtil.getUserId(request);

        LoggedInUsers.addUser(userId);

        System.out.println("Logged in user: " + userId);
    }

}

SessionDestroyAction

package com.services.loggedinuser;

import com.liferay.portal.kernel.events.SessionAction;
import com.liferay.portal.kernel.util.WebKeys;

import javax.servlet.http.HttpSession;

public class SessionDestroyAction extends SessionAction {

    // Runs when a session is destroyed (logout or timeout) and removes the user.
    @Override
    public void run(HttpSession session) {
        Long userId = (Long) session.getAttribute(WebKeys.USER_ID);

        LoggedInUsers.removeUser(userId);

        System.out.println("Logged out user: " + userId);
    }

}

Also create one model object for the live users.
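One aside that is not part of the original post: LoginPostAction and SessionDestroyAction run on different request threads, yet the LoggedInUsers model object defined next keeps its users in a plain static HashSet, which is not thread-safe. If concurrent logins and logouts matter to you, a minimal sketch of a thread-safe variant (hypothetical class name, same static API) might look like this:

// Hypothetical thread-safe variant of the LoggedInUsers model object below.
// It simply swaps the plain HashSet for a set backed by ConcurrentHashMap so
// concurrent login and session-destroy events are safe.
package com.services.loggedinuser;

import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentLoggedInUsers {

    private static final Set<Long> _users =
        Collections.newSetFromMap(new ConcurrentHashMap<Long, Boolean>());

    public static void addUser(long userId) {
        _users.add(userId);
    }

    public static void removeUser(long userId) {
        _users.remove(userId);
    }

    public static int countUsers() {
        return _users.size();
    }

    public static Set<Long> loggedInUsers() {

        // Return an unmodifiable view so callers cannot mutate the set.
        return Collections.unmodifiableSet(_users);
    }

}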
LoggedInUsers

package com.services.loggedinuser;

import java.util.HashSet;
import java.util.Set;

public class LoggedInUsers {

    private static Set<Long> users = new HashSet<Long>();

    public static void addUser(long userId) {
        users.add(userId);
    }

    public static void clearUsers() {
        users.clear();
    }

    public static void removeUser(long userId) {
        users.remove(userId);
    }

    public static int countUsers() {
        return users.size();
    }

    public static boolean isLogged(long userId) {
        return users.contains(userId);
    }

    public static Set<Long> loggedInUsers() {
        return users;
    }

}

Neha Goyal 2018-01-05T12:29:47Z
Categories: CMS, ECM

Keeping Customizations Up to Date with Liferay Source

Liferay - Thu, 01/04/2018 - 21:23

Welcome to the fourth entry in a series about what to keep in mind when building Liferay from source. First, to recap the previous entries in this series from last year:

  • Getting Started with Building Liferay from Source: How to get a clone of the Liferay central repository and how to build Liferay from source. Also some tools that can help you setup your IDE (whether it's Netbeans, Eclipse, or IntelliJ) to navigate that portal source.
  • Troubleshooting Liferay from Source: How to make changes to the source code in Liferay's central repository and deploy changes to individual Liferay modules. Also some tips and tricks in troubleshooting issues within Liferay itself.
  • Deploying CE Clustering from Subrepositories: How to take advantage of smaller repositories to deploy clustering to a CE bundle. How to maintain changes to the code base without having to clone or build the massive Liferay central repository.

Continuing onward from there, one of the practical reasons why you might want to be able to build Liferay from source is simply to keep your Liferay installation up to date.

However, keeping your Liferay installation up to date involves a lot more than just rebuilding Liferay from source. After all, while many of Liferay's customers start with the noble goal of staying up to date with Liferay releases (and they're given binaries that don't require them to build from source), for a variety of reasons, these updates wind up delayed.

Among these reasons is that Liferay is a platform you can customize. Yet, when you customize Liferay with an incomplete understanding of Liferay's release artifacts (and as a consequence, an incomplete understanding of your own release artifacts if they depend on Liferay's release artifacts), your customizations will mysteriously stop working when you apply an update. When this happens, your ability to apply the update gets stalled.

This entry will talk about some of the struggles that you are likely to encounter whenever you try to keep your own customizations up to date whenever you update Liferay. In order to provide you with a more concrete example that you can use to understand the different roadblocks, this entry will walk you through a customization that's compiled against an initial release of Liferay, and the hardships you're likely to face if you tried to deploy it with an incomplete understanding of Liferay's release artifacts.

A Minimal Customization, Part 1

In this entry, we'll go over a simple customization. I say simple, but it's one that would have failed to initialize in all 34 of the past 34 fix packs. Additionally, even if you had managed to make it work on the first fix pack using naive approaches, you would have needed to update it again in 24 out of the following 33 fix packs. This would occur even if you wrote the code exactly as Liferay intended for you to write it.

This customization is deceptively simple: modifying Liferay's terms of use.

Why You Modify Terms of Use

There are many reasons to customize a terms of use, but one very prominent reason is fast approaching.

On May 25, 2018, the General Data Protection Regulation (GDPR) will go into effect. There are a great many things involved in compliance, and among them is the fact that those who collect data on citizens of the member countries of the European Union will be held to a higher standard when it comes to obtaining consent.

Chapter I, Article 4 of the GDPR defines consent as "any freely given, specific, informed and unambiguous indication of the data subject's wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her".

With that in mind, a terms of use agreement is a simple way to receive consent, even though there are internet memes about whether people actually read them, and academic studies showing that many college students probably do not.

How Terms of Use is Implemented

By default, Liferay records whether you have agreed to its terms of use through a single boolean flag in the database: agreedToTermsOfUse in the User_ table.

When dispatching any portal request, PortalRequestProcessor will force any authenticated user who has not agreed to the terms of use through to the Terms of Use page.

To present the user with the terms of use agreement, you can trace the logic for 7.0.x to a single JSP named terms_of_use_default.jsp that you can modify directly in the Liferay portal source code (or even from the release artifact binary).
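As a quick illustration of that flag (a hypothetical snippet, not something this article walks through): because agreedToTermsOfUse is a column on the User model, clearing it forces users back through the terms of use flow on their next request. Something along these lines could reset consent for a whole company after the terms change; verify the service and package names against your Liferay version.

// Hypothetical sketch: clear the agreedToTermsOfUse flag so every user in the
// company is routed back to /c/portal/terms_of_use on their next request.
// Model and service names are the 7.0 portal-kernel ones; verify against your version.
import com.liferay.portal.kernel.model.User;
import com.liferay.portal.kernel.service.UserLocalServiceUtil;

public class TermsOfUseReset {

    public static void resetCompany(long companyId) {
        for (User user :
                UserLocalServiceUtil.getCompanyUsers(companyId, -1, -1)) {

            user.setAgreedToTermsOfUse(false);

            UserLocalServiceUtil.updateUser(user);
        }
    }

}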

How Terms of Use is Customized

If modifying this single JSP inside of the Liferay web application archive is insufficient for your needs (for example, you want to use services provided by a module), Liferay provides a mechanism for more elaborate customizations: a TermsOfUseContentProvider.

By default, Liferay provides you with a single example of this that allows you to configure a piece of web content to serve as the terms of use in place of the default terms of use.

In theory, because you can embed portlets inside of web content in the same way you embed them in a theme or layout template (Embedding Portlets in Themes and Layout Templates), the default implementation of TermsOfUseContentProvider provided in journal-terms-of-use can be very flexible from a user interface perspective. In practice, until you've actually agreed to terms of use, a lot of requests that dispatch to pages do not work until you've agreed to those terms of use, and portlet requests always go to pages.

We can start creating a new one inside of a Liferay workspace using the following command:

blade create -t service \
    -s com.liferay.portal.kernel.util.TermsOfUseContentProvider \
    -p com.example.termsofuse \
    -c ExampleTermsOfUseContentProvider \
    example-terms-of-use

If you check the interface (or you let your IDE populate all the methods in the interface so that it can compile), you find that TermsOfUseContentProvider requires implementing three methods:

  • includeConfig: This expects you to use a RequestDispatcher to include a JSP. It is called from portal-settings-web, and you can view the area that renders it by navigating to Control Panel > Instance Settings and, in the Configuration tab, scrolling down to the Terms of Use section. The one you see by default comes from the com.liferay.journal.terms.of.use module.
  • includeView: This expects you to use a RequestDispatcher to include a JSP. It is called from portal-web, and you can view the area that renders it if you have a user that has not agreed to the terms of use, or by navigating directly to /c/portal/terms_of_use.
  • getClassName: On the surface, the method name suggests that one day, Liferay might allow you to have different terms of use for different types of assets (such as a separate terms of use for document library). However, at this time, this hasn't been implemented, and the lack of stable Map iteration also means that if you have multiple content providers with different class names, Liferay presents what is functionally equivalent to a random terms of use content provider for both view and configuration (source code).

As noted in the getClassName note above, the first thing you have to do before you even customize it is disable the existing implementation.

  • If you are building from source, you can achieve this by deleting osgi/modules/com.liferay.journal.terms.of.use.jar and then removing the file modules/apps/web-experience/journal/journal-terms-of-use/.lfrbuild-portal so that it doesn't get deployed again when you rebuild Liferay from source.
  • If you are using an older release rather than building from source, you can achieve this with an empty marketplace override of com.liferay.journal.terms.of.use.jar (namely, just a JAR with no classes), as described in Overriding LPKG Files.
  • If you are using an up to date release rather than building from source, you can achieve this in later versions of Liferay with Blacklisting OSGi Modules, and either using the GUI or using a configuration file to blacklist the com.liferay.journal.terms.of.use module.

With that in mind, let's assume that we've done that, and that we'll create a new implementation of TermsOfUseContentProvider. Here is what a set of empty method implementations might look like, which we would add to ExampleTermsOfUseContentProvider.java:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// ...

@Override
public String getClassName() {
    System.out.println("Called getClassName()");

    return "";
}

@Override
public void includeConfig(
        HttpServletRequest request, HttpServletResponse response)
    throws Exception {

    System.out.println(
        "Called includeConfig(HttpServletRequest, HttpServletResponse)");
}

@Override
public void includeView(
        HttpServletRequest request, HttpServletResponse response)
    throws Exception {

    System.out.println(
        "Called includeView(HttpServletRequest, HttpServletResponse)");
}
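For orientation, here is a hedged sketch of what the complete class might look like once these stubs are dropped into the file blade generated; the @Component registration is what blade's service template adds, and the exact boilerplate can differ slightly between blade versions (the println calls are omitted here for brevity):

package com.example.termsofuse;

import com.liferay.portal.kernel.util.TermsOfUseContentProvider;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.osgi.service.component.annotations.Component;

// Registers this class as an OSGi service for the TermsOfUseContentProvider
// interface, so Liferay can pick it up in place of the disabled default.
@Component(immediate = true, service = TermsOfUseContentProvider.class)
public class ExampleTermsOfUseContentProvider
    implements TermsOfUseContentProvider {

    @Override
    public String getClassName() {
        return "";
    }

    @Override
    public void includeConfig(
            HttpServletRequest request, HttpServletResponse response)
        throws Exception {
    }

    @Override
    public void includeView(
            HttpServletRequest request, HttpServletResponse response)
        throws Exception {
    }

}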

To get it to compile, we will need to update build.gradle to provide the dependencies that we need in order to compile these empty method implementations:

dependencies {
    compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "2.0.0"
    compileOnly group: "javax.servlet", name: "javax.servlet-api", version: "3.0.1"
    compileOnly group: "org.osgi", name: "org.osgi.service.component.annotations", version: "1.3.0"
}

Attempt to Deploy the Stub Implementation

At this point, we have completed a stub implementation.

In general, whenever you work with a new extension point for the first time, you should stop as soon as you have a stub implementation and try a few small things to see if the extension point will work the way you expect it to. For a TermsOfUseContentProvider, however, as you will soon see, your first unwieldy obstacle is getting it to deploy at all.

If you invoke blade gw jar, it will create the file build/libs/com.example.termsofuse-1.0.0.jar. If you're using a Blade workspace, you can set liferay.workspace.home.dir in gradle.properties and use blade gw deploy to have it be copied to ${liferay.home}/osgi/modules, or you can manually copy this file to ${liferay.home}/deploy.

When you do so, you will see a message saying that the bundle is being processed, but the bundle never starts.

If you check with the Gogo shell (Felix Gogo Shell) using lb -s | grep example, you will see that it has stayed in the INSTALLED state. If you note the bundle ID that comes back (it's the first column in the list of results) and then use diag #, where you replace # with the bundle ID, it will tell you why it's not in the ACTIVE state:

Unresolved requirement: Import-Package: com.liferay.portal.kernel.util; version="[7.0.0,7.1.0)"

If this is the first time you've seen an error message like this, you will want to read up on Resolving Bundle Requirements and Detecting Unresolved OSGi Components for a little bit of background before continuing.

The Naive Bundle Manifest

The previously linked documentation talks about how you can resolve the error, but if you're building up expertise rather than troubleshooting, I think it's also useful to understand what's causing the problem, and thus reach an understanding of why certain steps can fix that problem.

So, why does this error arise in the first place? Well, if you open up build/tmp/jar/MANIFEST.MF (which we describe in more detail in OSGi and Modularity for Liferay Portal 6 Developers), you should see the following lines:

Import-Package: com.liferay.portal.kernel.util;version="[7.0,7.1)",javax.servlet.http;version="[3.0,4)"

This line is why the com.example.termsofuse-1.0.0.jar bundle asks for com.liferay.portal.kernel.util with the specified version range. It leaves us with two unanswered questions: (1) why does it ask for version 7.0 (inclusive) as the lower part of the range, and (2) why does it ask for version 7.1 (exclusive) as the upper part of the range?

Default Import-Package Lower Bound

First, the reason our bundle imports the com.liferay.portal.kernel.util package at all is that it is the package containing the interface we implement, com.liferay.portal.kernel.util.TermsOfUseContentProvider.

This class comes from the com.liferay.portal.kernel dependency specified in build.gradle, and since we've specified version 2.0.0 of this dependency, we can find it in one of the subfolders of ${user.home}/.gradle/caches/modules-2/files-2.1/com.liferay.portal/com.liferay.portal.kernel/2.0.0.

Note: If this is the first time you've needed to check inside a .gradle cache, the folder layout is similar to a Maven cache except it uses the SHA1 as the folder name rather than as a separate file, and one SHA1 corresponds to a .pom file while the other corresponds to a .jar file. In some cases, there may be a third SHA1 that corresponds to the source code for the artifact.

If we check inside the META-INF/MANIFEST.MF file within the .jar file artifact, we'll find the following lines buried inside of it:

Export-Package: com.liferay.admin.kernel.util;version="1.0.0";uses:="c...",
 ...,
 com.liferay.portal.kernel.url;version="1.0.0";uses:="javax.servlet",
 com.liferay.portal.kernel.util;version="7.0.0";uses:="com.liferay.expando.ke...",
 ...

And this is where we get the 7.0 as the lower bound on the version range: version 2.0.0 of the com.liferay.portal.kernel artifact exports version 7.0.0 of the com.liferay.portal.kernel.util package.

Default Import-Package Upper Bound

For many package imports, like the javax.servlet.http import, you'll notice that they take the form [<x>.<y>, <x+1>), where the upper part of the range essentially asks for the next major version. However, our com.liferay.portal.kernel.util import is a lot less optimistic, instead choosing a version range of [<x>.<y>, <x>.<y+1>). The reason lies in how our code uses the classes from the packages we import.

In the case of javax.servlet.http, we're just using objects that implement the HttpServletRequest and HttpServletResponse interfaces. Whenever you simply consume an interface (or consume a class), the default accepted version range will be set to [<x>.<y>, <x+1>).

In the case of com.liferay.portal.kernel.util, we're implementing the TermsOfUseContentProvider interface. If you implement an interface, then the Import-Package statement will sometimes be optimistic by default and specify [<x>.<y>, <x+1>) and sometimes be more pessimistic by default and specify [<x>.<y>, <x>.<y+1>). In our case, it's chosen the more pessimistic default.

There are technical details that the creator of the interface needs to consider when deciding whether implementors need to be optimistic or pessimistic (The Needs of the Many Outweigh the Needs of the Few). These details center around fairly nebulous concepts like whether the interface is intended to be implemented by an "API provider" or by an "API consumer" (Semantic Versioning Technical Whitepaper), where an API provider and an API consumer are very abstractly defined.

However, from the side of someone implementing an interface, we can simply look at the end result, which is a commitment on the stability of the interface:

  • If it is marked as a ConsumerType (or not marked at all, since ConsumerType is assumed if no explicit annotation is provided), this interface is not allowed to change during a minor version increment to its package, so implementors do not need to worry about minor version changes
  • If it is marked as a ProviderType, this interface is allowed to change during a minor version increment to its package, so implementors do need to worry about minor version changes

This leads to the following default behavior when setting the upper version range on a package import involving an implemented interface:

  • If the interface we implement is annotated with the ConsumerType annotation (or not annotated at all, since ConsumerType is assumed if no explicit annotation is provided), we can be optimistic, and the default accepted version range will be set to [<x>.<y>, <x+1>)
  • If the interface we implement is annotated with the ProviderType annotation, we should be pessimistic, and the default accepted version range will be set to [<x>.<y>, <x>.<y+1>)

And this is where we get the 7.1 as the upper bound on the version range: TermsOfUseContentProvider is annotated with the ProviderType annotation, which means that minor version changes to the package might also include an update to the package we implement, so we should be conservative when specifying the accepted version ranges.
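To make the distinction concrete, here is a hedged illustration using two hypothetical interfaces; the annotations shown are the aQute.bnd.annotation ones used by portal-kernel in this era, while newer code bases use the equivalent org.osgi.annotation.versioning annotations:

// ExampleListener.java -- hypothetical interface meant to be implemented by API consumers.
import aQute.bnd.annotation.ConsumerType;

// bnd lets implementors be optimistic: the package import defaults to
// something like [7.0,8), because the interface cannot change in a minor release.
@ConsumerType
public interface ExampleListener {

    public void onEvent(String event);

}

// ExampleContentProvider.java -- hypothetical interface owned by the API provider.
import aQute.bnd.annotation.ProviderType;

// bnd makes implementors pessimistic: the package import defaults to
// something like [7.0,7.1), because the interface may change in any minor release.
@ProviderType
public interface ExampleContentProvider {

    public String getContent();

}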

The Improved Bundle Manifest

So now that we know that the default behavior for our package import is [<x>.<y>, <x>.<y+1>), we have two options for getting our bundle to deploy. Either we can (a) choose a different dependency to generate a version range compatible with our installation automatically, or (b) set a broader version range manually.

Automatically Set Import-Package

In the case of (a), now that you know where the lower part of the range <x>.<y> comes from, you can change the dependency version of com.liferay.portal.kernel so that it exports the same version of the package that is exported by your Liferay installation. For example, if you know that your version of com.liferay.portal.kernel is a snapshot release of 2.57.1, you can specify the following in your build.gradle:

compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "2.57.0"

However, how exactly do you find that value?

If you've built from source, all the versions are computed during the build initialization (specifically the ant setup-sdk step) and copied to .gradle/gradle.properties. If you open up that file, you'll find something that looks like this, which will give you both the module name and the module version.

com.liferay.portal.impl.version=x.y.z-SNAPSHOT
com.liferay.portal.kernel.version=x.y.z-SNAPSHOT
com.liferay.portal.test.version=x.y.z-SNAPSHOT
com.liferay.portal.test.integration.version=x.y.z-SNAPSHOT
com.liferay.util.bridges.version=x.y.z-SNAPSHOT
com.liferay.util.java.version=x.y.z-SNAPSHOT
com.liferay.util.taglib.version=x.y.z-SNAPSHOT

If you're curious where that information comes from, the bundle name is found inside build.xml as the manifest.bundle.symbolic.name build property (example here), while the bundle version is found inside bnd.bnd as the Bundle-Version (example here).

If you're working with a release artifact, then as documented in Configuring Dependencies, open up the portal-kernel.jar provided with your version of the Liferay distribution and check inside of META-INF/MANIFEST.MF for its version. This will provide you with what the version was at build time for portal-kernel.jar. If constantly unzipping .jar files gets to be too tedious, you can also look it up using a tool I created for seeing how Liferay's module versions have evolved over time: Module Version Changes Since DXP Release

However, the automatic approach has a limitation: Liferay does not release com.liferay.portal.kernel with every release of Liferay, but rather, each Liferay release uses a snapshot release of com.liferay.portal.kernel.

This isn't a big deal if the snapshot has a minor version like .1, because a packageinfo minor version increment will also trigger a bundle minor version increment, and so a .1 snapshot will have the same minor versions on its exports as the original .0 release.

However, when the snapshot has a minor version of .0, things get murky precisely because it is a snapshot: there was some package change between the previous minor version and the snapshot version, but it's not guaranteed to have been the package we are using. Additionally, even if it wasn't our package that was updated, our package might still change between the snapshot used for the Liferay release and the time the actual .0 artifact is published, because the Baseline Plugin allows all packages to experience a minor version increment up until the version is published and the baseline version changes.

As a result, you have to check both the release version and one minor version below to see which one you need to use in order to get the correct version range generated automatically. If you are implementing multiple interfaces from different packages within the same artifact, it's also theoretically possible that there is no version you can use to have the correct version range generated automatically.

  • DE-15 was released with a snapshot of 2.28.0. The snapshot version exports 7.22, version 2.27.0 exports 7.22, and version 2.28.0 exports 7.22.
  • DE-27 was released with a snapshot of 2.42.0. The snapshot version exports 7.30, version 2.41.0 exports 7.29, and version 2.42.0 exports 7.30.
  • DE-28 was released with a snapshot of 2.43.0. The snapshot version exports 7.31, version 2.42.0 exports 7.30, and version 2.43.0 exports 7.31.
Manually Set Import-Package

At this point, we've discovered that the automatic approach is hardly automatic at all, because we're still investigating the package versions of different artifacts. We also know that an automatic approach might fail. Given that we'll need to investigate all the artifacts and package versions anyway, how do we achieve (b)?

Since you're setting a version range, you will want to set the broadest version range that is known to compile successfully. To that end, from the OSGi perspective, you update bnd.bnd with a new Import-Package statement that is known to work, and this Import-Package will automatically be added to the generated META-INF/MANIFEST.MF. We also add * to tell bnd to include everything else it was planning to add.

In the case of a ProviderType (which is really the only time when this kind of problem happens), its API can change for any minor release. Therefore, we should only include version ranges where we know the package has not yet changed, and we should not project into the future beyond that. Therefore, if we know that our interface has its current set of methods at <a>.<i>, and it still has not changed as of <b>.<j>, we would choose the version range [<a>.<i>, <b>.<j+1>).

In the specific case of TermsOfUseContentProvider, the current interface methods date back to version 7.0 of the package. If you check the source code of the interface within DE-34 to confirm that it is still unchanged in the version of Liferay you are using, and you unzip portal-kernel.jar and check its META-INF/MANIFEST.MF to find the corresponding export version of 7.40.0, this means that we can use the following Import-Package statement in our bnd.bnd.

Import-Package: com.liferay.portal.kernel.util;version="[7.0,7.41)",*

If this process gets to be too tedious, you can also look it up using a tool I created for seeing how Liferay's package versions have evolved over time: Package Breaking Changes Since DXP Release

History of a Similar Customization

If the 7.0 to 7.41 version range did not immediately clue you in: were you to scan through the evolution of com.liferay.portal.kernel.util across different versions of Liferay, you would discover that while the TermsOfUseContentProvider interface itself has not changed at all since the initial DXP release, the package it resides in is updated very frequently. In fact, it has changed in 25 of the past 34 fix pack releases.

Because both the automatic process and the manual process rely on minor versions, this means that no matter which route you chose, you would have needed to modify either your build.gradle or your bnd.bnd for each one of those releases, or your custom terms of use would have failed to deploy in 25 out of the past 34 fix packs.

This leads us to the following question.

Liferay has its own journal-terms-of-use, which we mentioned earlier in this entry, that implements the TermsOfUseContentProvider interface. Obviously it should run into the same issue. So, how has Liferay been keeping journal-terms-of-use up to date?

Import-Package Version Range Macro

At the beginning, journal-terms-of-use started by trying to solve the reverse problem: if we know that we aren't changing the API, how do we ensure that the bundle can deploy on older versions? The idea was built on a concept where we'd release the Web Experience package separately from the rest of Liferay, and we wanted this package to be able to deploy against older versions of Liferay.

With LPS-64350, Liferay decided to achieve this using version range macros inside of the bnd.bnd:

Import-Package: com.liferay.portal.kernel.util;version="${range;[=,=+)}",*

Essentially, this says that we know it works as of the initial major version release, and we know it will work up until the next minor version. From there, we'd update build.gradle with every release whenever we confirmed that we had not changed the interface within com.liferay.portal.kernel, and the version range macro would allow it to be compatible with all previous releases without us having to explicitly look up the package version for the current release.

Gradle Dependency Version Range

However, after a while, this got to be extremely tedious, because we were updating build.gradle with every release of com.liferay.portal.kernel.

From there, we came up with a seemingly clever idea. Since Liferay was rebuilt at release time anyway, we could tell Gradle to fetch the latest release of com.liferay.portal.kernel. As a result, we'd simply re-compile Liferay, and this latest release combined with the version range macro would give us the desired version range automatically. This is functionally equivalent to replacing the com.liferay.portal.kernel dependency with the following:

compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "[2.0.0,3.0.0)"

We later learned that this approach had two critical problems.

First, Gradle is not guaranteed to try to use the latest version of a dependency whenever you specify a range. Therefore, you might run into a situation where your portal would fail to deploy journal-terms-of-use simply because Gradle happened to choose something earlier than the latest dependency version.

Second, we might implement multiple interfaces that come from multiple packages published by the com.liferay.portal.kernel artifact. Because we only had a version range macro set for the com.liferay.portal.kernel.util package, the journal-terms-of-use module would suddenly fail to deploy if we were to rebuild it and deploy the bundle to an older Liferay release (such as when building for a hotfix) due to the other ProviderType interfaces it might have implemented, or if Liferay converted a ConsumerType interface into a ProviderType interface without incrementing the major version on the package (it's not required, similar to changing the byte-code compilation level, and so Liferay never does so).

Dependency Version as a Property

As a temporary stop-gap measure for building against older versions of Liferay, we needed a way to retain the old manifests. As noted before, there's a problem: the version of com.liferay.portal.kernel that accompanies a past release is an unpublished snapshot.

In theory, we could simply publish a snapshot to our local Gradle cache at build time and reference it, but internally at Liferay, our source formatter rules disallow using an un-dated snapshot as part of a dependency version. Luckily, there's a workaround for that: because the check simply looks at the string, as long as something else provides the snapshot version (like a variable or a build property), we are allowed to use it.

So, our workaround at the time was to take advantage of something that is automatically set inside of gradle.properties whenever you build Liferay from source. This can also be set manually for your own Blade workspace. Ultimately, the net effect of using a build property is that you update a single file and it can be referenced by all other custom modules within the same workspace, which is the same idea as using a Groovy variable or a Maven build property.

com.liferay.portal.kernel.version=2.57.1-SNAPSHOT

Once this property is set, there is an additional shorthand for using it. The Liferay Gradle plugin uses a LiferayExtension that allows us to use the name default in order to reference this property value, or substitutes the Apache Ivy alias latest.release (which Gradle happens to recognize) if the property has not been set.

As a result, our com.liferay.portal.kernel dependency looks like this:

compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "default"

If we then build and deploy the resulting .jar, all of our modules will compile against the specified version of com.liferay.portal.kernel. If a ProviderType interface has changed relative to what the original values inside of bnd.bnd expect, the build will fail at compile time for every module that follows this pattern. We can then just update this property any time we're updating to a later fix pack or rebuilding Liferay from source, and the build itself confirms that the ProviderType interfaces have not changed.

Treat it Like a ConsumerType

In order to fix the bug introduced with the Gradle versions while also retaining the intended result of being able to deploy a module like journal-terms-of-use on multiple versions of Liferay, the answer we arrived at in LPS-70519 was to simply treat TermsOfUseContentProvider like a ConsumerType when specifying version ranges, even though it's been marked as a ProviderType.

In other words, we manually set the lower bound to be the version of com.liferay.portal.kernel.util that is exported by the minimum com.liferay.portal.kernel that provides other API that journal-terms-of-use needs, and we set the upper bound to be the next major version after that, just as would happen automatically with implementing a ConsumerType interface or any other regular class usage.

Import-Package: com.liferay.portal.kernel.util;version="[7.15.0,8)",*

There are two downsides to this, both of which Liferay accepts.

The first is that the module advertises something that is technically untrue. Because it is a ProviderType, Liferay can modify TermsOfUseContentProvider before the next major release, and even though the module declares that it will work with every version of the package up through 8, this won't be true if the interface gets updated.

The second is that this approach results in journal-terms-of-use being unable to detect when we make binary-incompatible changes to TermsOfUseContentProvider. However, in practice, Liferay can get away with treating this particular ProviderType as though it were a ConsumerType for journal-terms-of-use, because Liferay itself maintains both the interface and the implementation, and therefore a code reviewer would know if we changed the interface and know to update our implementation of that interface.

A Minimal Customization, Part 2

With all of that background information, we can now come back to our module and make it work.

Choose a Long-Term Solution

At this point, we have two exactly opposite solutions that we can use over the long term: (a) add the dependency version as a build property, or (b) treat the ProviderType interface as though it were a ConsumerType interface. With the former solution, you accept the idea that you will need to check each time you update, but you do it once per workspace rather than once per module. With the latter solution, you reject that as being too tedious and accept the risk that Liferay might one day change the ProviderType and your module will stop working.

If we'd like to accept the downside of constantly updating our com.liferay.portal.kernel version in order to ensure that the TermsOfUseContentProvider interface has not changed, we can set our dependency version as default and maintain gradle.properties with an up to date value of com.liferay.portal.kernel.version for each Liferay release you update to. This allows us to handle all ProviderType interfaces in one place at compile time and leads to the following bnd.bnd entry for Import-Package:

Import-Package: com.liferay.portal.kernel.util;version="${range;[=,=+)}",*

Because the version of com.liferay.portal.kernel has changed at compile time, it's likely that the manifest is also changing. Therefore, if you go with this solution, you will want to increment the Bundle-Version each time you update the properties value just as you might do with Maven artifacts that depend on changing properties values, because the binary artifact produced by the compilation will be changing alongside the properties value change.

If we'd prefer not to accept the downside of constantly updating our com.liferay.portal.kernel version, you can choose to treat TermsOfUseContentProvider as a ConsumerType. In this case, you'd leave com.liferay.portal.kernel at whichever minimum version you need for API compatibility, and add the following bnd.bnd entry for Import-Package:

Import-Package: com.liferay.portal.kernel.util;version="${range;[==,+)}",*

As noted earlier, you essentially give up the ability to check for binary compatibility at build time, and you will need to periodically check in on the ProviderType interfaces to make sure that they have not changed, because those changes will not be detected at build time and they will not be noticed at deployment time. You will likely only notice if you coincidentally wrote a functional test that happens to hit a page that invokes the new methods on the interface.

Successfully Deploy the Stub Implementation

Whichever route you choose, we have completed our updates to the stub implementation.

Just as before, if you invoke blade gw jar, it will create the file build/libs/com.example.termsofuse-1.0.0.jar. If you're using a Blade workspace, you can set liferay.workspace.home.dir in gradle.properties and use blade gw deploy to have it be copied to ${liferay.home}/osgi/modules, or you can manually copy this file to ${liferay.home}/deploy.

When you do so, you will see a message saying that the bundle is being processed, and then you will see a message saying that the bundle has started.

If you then navigate to /c/portal/terms_of_use, then assuming that you also disabled the com.liferay.journal.terms.of.use module as documented earlier, it will show you a completely empty terms of use rather than the default terms of use, and all you can do is agree or disagree to the empty page.

Just Beyond the Stub Implementation

While this article is focused on understanding roadblocks rather than providing sample code, it would be a little disingenuous to stop here and say that we have a functioning implementation of a terms of use override, so we'll take a few steps forward and bring in some additional information.

Just like any other component that needs to include JSPs, we'll need a reference to the appropriate ServletContext. You can find an example of this in Customizing the Control Menu and Customizing the Product Menu.

For our specific example, we might add the following to our bnd.bnd so that we can have a ServletContext:

Web-ContextPath: /example-terms-of-use

We would then add the following imports and replace the content of the includeView method in ExampleTermsOfUseContentProvider.java, assuming that the bundle symbolic name from the steps so far is com.example.termsofuse (which it should be, by default):

import javax.servlet.RequestDispatcher;
import javax.servlet.ServletContext;

import org.osgi.service.component.annotations.Reference;

// ...

@Override
public void includeView(
        HttpServletRequest request, HttpServletResponse response)
    throws Exception {

    System.out.println(
        "Called includeView(HttpServletRequest, HttpServletResponse)");

    RequestDispatcher requestDispatcher =
        _servletContext.getRequestDispatcher("/terms_of_use.jsp");

    requestDispatcher.include(request, response);
}

@Reference(target = "(osgi.web.symbolicname=com.example.termsofuse)")
private ServletContext _servletContext;

If we then create a file named src/main/resources/META-INF/resources/terms_of_use.jsp with the content <h1>TODO</h1> and redeploy our module, we'll now see the word "TODO" just above the agree and disagree buttons when we navigate to /c/portal/terms_of_use.
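Purely as an illustration (not part of the original walkthrough), that placeholder JSP could of course carry real markup; anything beyond plain HTML, such as Liferay or portlet taglibs, would also need the corresponding taglib declarations:

<%-- src/main/resources/META-INF/resources/terms_of_use.jsp --%>
<%-- Hypothetical placeholder content; replace with your actual terms text. --%>
<h1>Example Corp Terms of Use</h1>

<p>
    By clicking "I Agree" below, you consent to Example Corp processing the
    personal data you provide in order to deliver the services described in
    this agreement.
</p>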

Minhchau Dang 2018-01-05T02:23:33Z
Categories: CMS, ECM

Liferay Training: the specialization that brings value to companies and professionals

Liferay - Thu, 01/04/2018 - 07:35

The growing importance of the digital customer experience and of Digital Transformation initiatives in the strategy of organizations across every sector has driven a notable increase in demand for professionals who are experts in the technologies that make these goals possible. Aware of this, at Liferay we have given our Official Training Program a boost. This year we are increasing the number of public course sessions to 20 and expanding the cities where they are held: in addition to the traditional locations of Madrid, Barcelona, Bilbao and Lisbon, Sevilla joins this year as a new venue, given the growing number of requests and the importance and consolidation of its business hub. Alongside these public Official Training sessions, private training remains available on demand and continues to be well received, thanks to the positive experience of students and the results of the training.

The courses, which run for two or three days, follow a complete format that combines theory and practice. Attendees have the opportunity to tackle the most common difficulties in implementing and managing the platform, guided by Liferay professionals directly involved in product development. Liferay Fundamentals, Application Developer, Platform Developer and System Administrator are some of the best-known courses, providing the knowledge needed to get the most out of the technology.

For organizations, having employees accredited in a technology by its vendor is a guarantee of professionalism, ensuring that projects are carried out following best practices, in the most efficient way and with the highest quality.

In addition to this Official Training Program, new exam sessions have been announced for the Liferay DXP Certification, a credential that complements and accredits professionals' knowledge and skills.

Liferay training as an investment

Liferay is a widely adopted open source solution, so there is growing demand for profiles capable of implementing, managing and administering projects built on this technology. Liferay's Official Training Program is designed to meet that twofold need:

  • An opportunity for professionals. On the one hand, it rounds out the training of professionals who want to boost their careers, become more competent and stand out from other candidates.
  • An answer to organizations' demand for experts. On the other hand, it enables companies to have experts in the technologies needed to meet their business goals, allowing them to tackle their projects successfully, more efficiently and with greater agility.

Thanks to Liferay's Official Training offering, whose content is provided directly by the company's development and engineering teams, attendees are trained on the latest version of Liferay in each of the specializations covered, such as content management, system administration, and platform development and customization, leaving them fully prepared to use and manage the platform well.

Whatever the size of an organization's projects, Liferay-accredited professionals will know the Digital Experience Platform in depth and be able to make the most of the tool's potential to achieve business goals and deliver the best results. With this, organizations achieve notable savings in project development time and in downtime caused by errors, speeding up project delivery and, in turn, the achievement of their goals.

Digital Experience Platforms: the future

Digital Experience Platforms are part of the future of every organization, and Gartner already considers the concept of horizontal platforms to have been superseded by this new definition, "driven by the need to create solutions that respond to the challenges arising from digital transformation." Having professionals who are experts in tools of this kind, such as Liferay, will let you stay one step ahead of your competitors and take on the different kinds of projects needed to address Digital Transformation in your organization.

Liferay's Official Training Program: specialization that adds value

Learn everything you need to know about a technology in ever-greater demand to meet the requirements of Digital Transformation initiatives in organizations.

Learn about Liferay's Official Training   Marta Dueñas González 2018-01-04T12:35:09Z
Categories: CMS, ECM