AdoptOS

Assistance with Open Source adoption

ECM

The Best Portal for Employees

Liferay - Fri, 06/15/2018 - 17:29
Creating the Best Portal Starts with Empathy

If your company is going through a digital transformation, having an employee portal to help manage the cultural changes is extremely important. Many of the main challenges of digital transformation have to do with change management and empowering your employees, two things that a well-designed employee portal can support.

The current trend in employee portals is often called the social intranet or intranet 2.0. Instead of sites with static information, today's best examples incorporate social features such as blogs, instant messaging and comments. This shift has often been credited to the new audience of millennials, who are used to having these new forms of communication available at all times.

To create solutions this audience will actually use, IT teams must treat employees as customers when designing a new intranet or employee portal. Customer experience matters as much here as it does on the public website. The development team needs to stay open to feedback and improvements. The reward for this work will be evident in improved employee retention, which can be seen as the ultimate measure of an employee portal's success.

Six Questions to Ask Before You Start

There are two things that should be taken into account in any employee portal from the start of the project.

The first is accessibility, meaning the portal must be easy to use so that people don't get confused trying to get their work done.

The second is relevant content, so that the portal has enough material on its own that people will actually use it. Ideally, the site would gain enough momentum that employees contribute content regularly and use its tools on their own initiative.

To address these two areas, here are six questions you can ask to start building an employee portal that people will actually use:

# 1 Why Do We Need an Employee Portal?

If you don't have a solid vision for your portal, the design process will quickly lose focus. Companies often cite the following reasons for creating an employee portal or intranet:

  • Increased productivity
  • Unified corporate communication
  • Simplified business processes
  • Easier knowledge management and collaboration
  • Digital onboarding and training

# 2 What Is the ROI?

The clearest return on investment (ROI) is often the elimination of inefficient processes, such as employees spending time searching for documents across multiple repositories. Defining the best ROI opportunities will help you prioritize development during the later stages of the project.

# 3 When We Look at Our Current Tools, What Doesn't Work the Way It Should?

This question begins to build empathy with your end users. For example, most companies have at least one process that depends on a few employees manually entering data into an Excel spreadsheet. That system may work for a small amount of data, but it doesn't scale as the company grows. Or you may find that employees never use the search feature on your current intranet because it doesn't return the right results.

This is where you begin your design plan. By fixing a frustrating process, you can build credibility and win over your audience early in the project.

# 4 What Do Our Employees Need to Do?

People don't like change, even if you promise that the new portal you are building will make their lives easier once they get used to it. The first release of the employee portal should take the tasks people have to do - logging timesheets, requesting vacation - and put those functions entirely inside the new solution, so that employees have to use it. This may seem aggressive, but it holds the IT team accountable for building a portal that is genuinely efficient, reliable and easy to navigate. Despite most of the intranet advice you will find online, designing the best employee portal is not about making something beautiful. It is about making something practical. Done well, that drives usage and engagement.

# 5 Which Tools Do Our Employees Like?

Designing your user experience around tools your team already likes can help create early engagement, because the portal experience will feel familiar. If your employees love using Facebook pages to plan company events, build an internal social platform that includes those same features.

# 6 How Do Our Employees Collaborate?

Effective collaboration is hard to achieve through a portal, but it can deliver some of the biggest gains in increased productivity and easier communication. If you can gain a deep understanding of the pain points around collaboration, you will be well positioned to build a portal with significant ROI. Spend time researching how users carry a project from start to finish, noting every tool they use along the way, from sticky notes to bulletin boards to text messages.

The Best Employee Portals Pay for Themselves

Finally, the best employee portal is the one that requires the fewest resources to manage while having the greatest impact on efficiency and engagement. Your solution should be something employees feel comfortable using, and there should be enough adoption that you receive plenty of feedback and requests for new features. It should also, however, be backed by a portal platform that can implement those new features without an enormous commitment of resources. The right balance depends on your organization's goals, but in most cases the benefits of a robust employee portal more than pay for themselves.

  Isabella Rocha 2018-06-15T22:29:50Z
Categories: CMS, ECM

Liferay Portal 7.1 Beta 3 Release

Liferay - Tue, 06/12/2018 - 11:52
I'm pleased to announce the immediate availability of: Liferay Portal 7.1 Beta 3
 
  Download Now: Fixed in Beta 3

New Features Summary

Modern Site Building: Liferay 7.1 introduces a new way of adding content. Fragments allow a content author to create content in small reusable pieces. Fragments can be edited in real time or can be exported and managed with the tooling of your choice. Use page templates from within a site and have complete control over the layout of your content pages. Navigation menus now give you complete control over site navigation. Create site navigation in new and interesting ways and have full control over the navigation's visual presentation.

Forms Experience: Liferay 7.1 includes a completely revamped forms experience. Forms can now have complex grid layouts, numeric fields and file uploads. They now include new personalization rules that let you customize the default behavior of the form. Using the new Element Sets, form creators can now create groups of reusable components. Form fields can now be translated into any language using any Liferay locale and can also be easily duplicated.

Redesigned System Settings: System Settings has received a complete overhaul. Configurations have been logically grouped together, making it easier than ever before to find what's configurable. Several options that were located on Server Administration have also been moved to System Settings.

User Administration: The user account form has been completely redesigned. Each form section can now be saved independently of the others, minimizing the chance of losing changes. The new ScreensNavigationEntry lets developers add any form they want to user administration.

Improvements to Blogs and Forums: Blog readers can now unsubscribe from notifications via email. Friendly URLs used to be generated based on the entry's title; authors now have complete control over the friendly URL of the entry. Estimated reading time can be enabled in System Settings and will be calculated based on the time taken to write an entry. Blogs also have a new cards ADT that can be selected from the application configuration. Videos can now be added inline while writing a new entry from popular services such as YouTube, Vimeo, Facebook Video and Twitch. Message board users can now attach as many files as they want by dragging and dropping them into a post. Message boards have also received many visual updates.

Workflow Improvements: Workflow has received a complete UI overhaul. All workflow configuration is now consolidated under one area in the Control Panel. Workflow definitions are now versioned, and previous versions can be restored. Workflow definitions can now be saved in draft form and published live when they are ready.

Infrastructure: Many improvements have been incorporated at the core platform level, including Elasticsearch 6.0 and the inclusion of Tomcat 9.0. At the time of this release, JDK 8 is still the only supported JDK.
Documentation

Documentation for Liferay 7.1 is well underway. Many sections have already been completed in the Deployment and Development sections. For information on upgrading to 7.1, see the Upgrade Guide.

Jamie Sammons 2018-06-12T16:52:54Z
Categories: CMS, ECM

Why we need a new liferay-npm-bundler (3 of 3)

Liferay - Fri, 06/08/2018 - 03:15
A real-life example of bundler 2.x in use

This is the last in a three-article series motivating and explaining the enhancements we have made to Liferay's npm bundler. You can read the previous article here.

To analyze how the bundler works, we are going to examine a real-life example comprising a portlet and an OSGi bundle that provides Angular so that the portlet can import it. The project looks like this:

 npm-angular5-portlet-say-hello
     package.json
        {
            "name": "npm-angular5-portlet-say-hello",
            "version": "1.0.0",
            "main": "js/angular.pre.loader.js",
            "scripts": {
                "build": "tsc && liferay-npm-bundler"
            }
            …
        }
     tsconfig.json
        {
            "target": "es5",
            "moduleResolution": "node",
            …
        }
     .npmbundlerrc
        {
            …  
            "exclude": {
                "*": true
            },
            "config": {
                "imports": {
                    "npm-angular5-provider": {
                        "@angular/animations": "^5.0.0",
                        "@angular/cdk": "^5.0.0",
                        "@angular/common": "^5.0.0",
                        "@angular/compiler": "^5.0.0",
                        …
                    },
                    "": {
                        "npm-angular5-provider": "^1.0.0"
                    }
                }
            }
        }
     src/main/resources/META-INF/resources/css
         indigo-pink.css
            …  
     src/main/resources/META-INF/resources/js
         angular.pre.loader.ts
            // Bootstrap shims and providers
            import 'npm-angular5-provider';
            …  
        …
     npm-angular5-provider
         package.json
            {
                "name": "npm-angular5-provider",
                "version": "1.0.0",
                "main": "bootstrap.js",
                "scripts": {
                    "build": "liferay-npm-bundler"
                },
                "dependencies": {
                    "@angular/animations": "^5.0.0",
                    "@angular/cdk": "^5.0.0",
                    "@angular/common": "^5.0.0",
                    "@angular/compiler": "^5.0.0",
                    …
                }
                …
            }
     src/main/resources/META-INF/resources
         bootstrap.js
            /**
              * This file includes polyfills needed by Angular and must be loaded before the app.
            …  
            require('core-js/es6/reflect');
            require('core-js/es7/reflect');
            …
            require('zone.js/dist/zone');
            …
    …

You can find the whole project available for download here. Also, keep in mind that it is meant to run on Liferay 7.1.0 B2 at least (download it from here). It will not work in Liferay 7.0.0 unless you make some modifications!

As you can see, the portlet's build process involves calling the TypeScript compiler (tsc) and then the bundler. We need to invoke tsc because Angular is based on the TypeScript language, and tsc is responsible for transpiling it to ES5. The TypeScript compiler is configured in the tsconfig.json file, and it is important that we set its output to es5 and its module resolution to node, because the bundler always expects its input JS files to be in those language and module formats.

Next, have a look at .npmbundlerrc where the imports for Angular are configured. Please note that we also import npm-angular5-provider with no namespace because we are going to invoke one of its modules to bootstrap Angular shims: see the angular.pre.loader.ts file, where npm-angular5-provider is imported. That import, in turn, loads npm-angular5-provider's main file (bootstrap.js).

Also, pay attention to the exclude section, where every dependency of npm-angular5-portlet-say-hello is excluded to prevent Angular from appearing inside its JAR. This makes the build faster and optimizes deployment, but don't worry if you forget to exclude an unneeded dependency: nothing will fail; it just won't be used and will take up a bit more space than needed.

The setup for npm-angular5-provider is very simple. It just declares Angular dependencies and invokes liferay-npm-bundler to bundle them. No need to do anything in this project. However, note how it also includes the bootstrap.js that is responsible for loading some shims needed by Angular. This file must always be invoked (by importing npm-angular5-provider from any portlet using it) before any portlet is run so that Angular doesn't fail because of missing APIs.

To finish, check out the indigo-pink.css file of npm-angular5-portlet-say-hello. To keep this example simple, we have copied this file from the @angular/material npm package. It contains a prebuilt theme suitable for Angular's Material Design widget framework. In a real setup, that file's styles should be provided by a Liferay theme instead of being bundled directly inside each portlet that needs them.

Now, suppose we run both builds. Let's see what the output would look like:

 npm-angular5-portlet-say-hello
     build/resources/main/META-INF/resources
         package.json
            {
                "dependencies": {
                    "@npm-angular5-provider$angular/animations": "^5.0.0",
                    "@npm-angular5-provider$angular/cdk": "^5.0.0",
                    "@npm-angular5-provider$angular/common": "^5.0.0",
                    "@npm-angular5-provider$angular/compiler": "^5.0.0",
            …
         js
             angular.loader.js
                "use strict";

                Liferay.Loader.define(
 ➥ "npm-angular5-portlet-say-hello@1.0.0/js/angular.loader",
 ➥ ['module', 'exports', 'require',
 ➥ '@npm-angular5-provider$angular/platform-browser-dynamic',
 ➥ './app.component',
 ➥ ...
 ➥ function (module, exports, require) {
                    var define = undefined;
                …
 npm-angular5-provider
     build/resources/main/META-INF/resources
         package.json
            {
                "name": "npm-angular5-provider",
                "version": "1.0.0",
                "main": "bootstrap.js",
                "dependencies": {
                    "@npm-angular5-provider$angular/animations": "^5.0.0",
                    "@npm-angular5-provider$angular/cdk": "^5.0.0",
                    "@npm-angular5-provider$angular/common": "^5.0.0",
                    "@npm-angular5-provider$angular/compiler": "^5.0.0",
                    …
                }
                …
            }
         bootstrap.js
            Liferay.Loader.define(
➥ 'npm-angular5-provider@1.0.0/bootstrap',
➥ ['module', 'exports', 'require',
➥ 'npm-angular5-provider$core-js/es6/reflect',
➥ ...
➥ function (module, exports, require) {
                var define = undefined;
                /**
                * This file includes polyfills needed by Angular and must be loaded before the app.
                …  
                require('npm-angular5-provider$core-js/es6/reflect');
                require('npm-angular5-provider$core-js/es7/reflect');
                …
                require('npm-angular5-provider$zone.js/dist/zone');
                …
            }
        …
         node_modules/npm-angular5-provider$core-js@2.5.7
             index.js
                Liferay.Loader.define(
➥ 'npm-angular5-provider$core-js@2.5.7/index',
➥ ['module', 'exports', 'require',
➥ './shim',
➥ ...
➥ function (module, exports, require) {
                    var define = undefined;
                    require('./shim');
            …
        …

Take a look at the output of npm-angular5-provider. As you can see, the bundler has copied the project's and node_modules' JS files to the output and wrapped them inside a Liferay.Loader.define() call so that the Liferay AMD Loader knows how to handle them. Also, the module names in require() calls and inside the Liferay.Loader.define() dependencies array have been namespaced with the npm-angular5-provider$ prefix to achieve dependency isolation.
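Conceptually, the wrapping step is simple string (in reality, AST) surgery. Here is a minimal sketch of it as a plain function; wrapAmd is a hypothetical name for illustration, not part of the bundler's API:

```javascript
// Sketch of the AMD-wrapping step: surround a CommonJS module's source
// with a Liferay.Loader.define() call, prepending the standard
// 'module', 'exports' and 'require' arguments to its dependency list.
// (The real bundler does this via Babel on the module's AST.)
function wrapAmd(moduleName, dependencies, source) {
    return (
        "Liferay.Loader.define('" + moduleName + "', " +
        JSON.stringify(['module', 'exports', 'require'].concat(dependencies)) +
        ", function (module, exports, require) {\n" +
        "var define = undefined;\n" +
        source +
        "\n});"
    );
}

var wrapped = wrapAmd(
    'npm-angular5-provider@1.0.0/bootstrap',
    ['npm-angular5-provider$core-js/es6/reflect'],
    "require('npm-angular5-provider$core-js/es6/reflect');"
);

console.log(wrapped);
```

The output has the same shape as the bootstrap.js shown above: the named define() call, the dependency array, and the factory function wrapping the original code.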

You may also have noticed the var define = undefined; added at the top of the file. This is introduced by liferay-npm-bundler to make each module think it is inside a CommonJS environment (instead of an AMD one). Some npm packages are written in UMD format and, because we are wrapping them inside our own AMD define() call, we don't want them to execute their own define(); we prefer them to take the CommonJS path, where exports are done through the module.exports global.

We have said that liferay-npm-bundler adds these modifications but, to be fair, the one really responsible is babel-plugin-wrap-modules-amd, a Babel plugin that is executed when Babel is invoked from liferay-npm-bundler in one of its build phases.

If you are curious about how that plugin is configured, take a look at the default preset used by liferay-npm-bundler, which references the liferay-standard Babel preset which, in turn, configures the babel-plugin-wrap-modules-amd plugin.

Now, let's look at the package.json file and notice how the dependencies have been namespaced too. This is necessary to make the namespaced define() and require() calls work inside the JS modules, and it is done by the liferay-npm-bundler-plugin-namespace-packages plugin, configured here.

There are more plugins involved in the build that serve miscellaneous purposes. You can check their descriptions and use in the Liferay Docs.

Now let's see how the bundler has modified npm-angular5-portlet-say-hello. In this case we will only pay attention to the changes made in two files, because the rest is more or less the same as with the npm-angular5-provider.

First of all, the angular.loader.ts file has been converted to angular.loader.js. This happened in two steps:

  1. The TypeScript compiler transpiled angular.loader.ts to angular.loader.js, generating a CommonJS module written in ECMAScript 5.
  2. The bundler then wrapped that code inside a Liferay.Loader.define() call to make it executable inside the Liferay AMD Loader.

But more importantly: the module imports Angular modules like @angular/platform-browser-dynamic, which the bundler would normally namespace with the bundle's own name (in this case npm-angular5-portlet-say-hello). But because we are importing them from npm-angular5-provider, they have been namespaced with npm-angular5-provider$ instead, so that they are loaded from that bundle at runtime (by the Liferay AMD Loader).
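That renaming decision can be sketched as follows. This is a hypothetical helper written only for illustration (the real logic lives inside the bundler's Babel plugins): packages listed under an imports provider in .npmbundlerrc get the provider's namespace; everything else gets the project's own.

```javascript
// Sketch: compute the namespaced name of a required package.
// Scoped names like @angular/common become @<provider>$angular/common,
// matching the output package.json shown above. Illustrative only.
function namespacedName(pkg, projectName, imports) {
    for (var provider in imports) {
        if (imports[provider][pkg] !== undefined) {
            // Imported package: loaded from the provider bundle at runtime
            return pkg.charAt(0) === '@'
                ? '@' + provider + '$' + pkg.slice(1)
                : provider + '$' + pkg;
        }
    }
    // Regular dependency: namespaced with the project's own name
    return projectName + '$' + pkg;
}

var imports = {
    'npm-angular5-provider': { '@angular/common': '^5.0.0' }
};

var imported = namespacedName(
    '@angular/common', 'npm-angular5-portlet-say-hello', imports);
var own = namespacedName(
    'isarray', 'npm-angular5-portlet-say-hello', imports);

console.log(imported); // @npm-angular5-provider$angular/common
console.log(own);      // npm-angular5-portlet-say-hello$isarray
```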

Finally, if you look at the dependencies inside the package.json file you will notice that the bundler has injected the ones pertaining to npm-angular5-provider to make them available at runtime.

And that's it. Hopefully this long series of articles helps you understand how the new bundler works and how to leverage it to deploy your most exciting portlets.

Have fun coding!

Ivan Zaera 2018-06-08T08:15:24Z
Categories: CMS, ECM

Customer Success Is the Key to Digital Transformation

Liferay - Thu, 06/07/2018 - 15:50

"Move fast and break things." Facebook's old motto has become an unintended slogan for the burgeoning tech culture and economy. One need look no further than the scandals swirling around so many truly innovative and disruptive companies to see how fast we've moved and how much we've broken. That's why at Gainsight our motto is "human-first." We make software for customer success.

Customer Success is a growing global movement in business—and not just software. It's not specifically an organization, department or job function, although many companies have "Customer Success Teams," "Customer Success Managers (CSMs)" and even "Chief Customer Officers (CCOs)." Many companies have sophisticated processes and tools for customer success. But all companies understand that the only way to keep your customers, expand their relationships with your company and attract new ones is to make sure they're getting what they pay for.

In other words, customer success is about ensuring your clients achieve their desired outcome with your product or service.

No digital transformation effort will be successful if it doesn't include that principle at its core.

Digital Transformation and the Subscription Economy

The global economy is in the middle stages of a fundamental shift away from one-time purchasing and perpetual licensing. Software is being infused into industries like manufacturing, agriculture, healthcare, services - everywhere. That's the basis for the digital transformation imperative causing upheaval in so many companies. And software is predominantly sold on a subscription basis.

But it's not just software. The subscription economy has come for our food, our clothes, even our shaving cream.

Why, though? It isn't as if there's a growing global demand for recurring payments. It's because, for the first time in the history of the modern economy, supply is greater than demand.

Supply > Demand

Throughout the 20th century, limitations in natural resources, means of production and other fundamental restrictions on how much stuff you can make and sell at maximum capacity created a natural balance between supply and demand. People want n widgets, Company A is capable of producing x widgets. The scarcest resource determines the cost and the supply.

But what happens when the fundamental unit of economic production is infinitely replicable? What happens when the widget is digital?

In the age of digital transformation, subscription economy and supply > demand, the scarcest resource isn't any raw material or finished goods. The scarcest resource is the customer.

And that means the key to success in the 21st century is protecting, nurturing, and growing your customer base. In other words, the key to your success is your customer's success.

Customer Success = Customer Outcomes + Customer Experience

Customers do not renew, expand or advocate if they are not achieving their desired outcome. But the quality of their interactions with your company will affect their level of success—and therefore their retention as well.

If customers are happy, it doesn't necessarily mean they'll renew. That's why it's called "customer success" and not "customer delight." But we do know that sentiment and satisfaction are correlated to the lifetime value of the customer. The economic imperative for maximizing that value is clear and was probably the impetus for your company's digital transformation.

But without a unified strategy for ensuring every customer a) achieves their desired outcome and b) has a great experience with your company, your customers' lifetime value will be limited.

Customer Success Is a Company-wide Imperative.

When you think about your customer's journey with your product or service, they interact constantly with different stakeholders throughout your organization. They'll have several "touchpoints" with marketers before they ever make a transaction. There will be interactions with salespeople, consultants, trainers, technical support, maybe even a CSM and a host of other people depending on the amount of touch you use with a typical client or segment of clients. And then there's the interaction with the product itself!

Your company has an opportunity to contribute to a customer's success at every touchpoint and every time they log in to your product. But you can't leave the consistency or quality of those touchpoints to chance. Read more in Gainsight's Guide to Company-Wide Customer Success.

The Periodic Table of Customer Success Elements

There's a science to customer success. Gainsight has implemented customer success people, process and technology at more than 500 companies, and we've learned a lot over the years about the best ways to make sure customers are successful.

There's too much to get into in this one blog, but we've identified 16 elements of a functional, company-wide customer success strategy. Now that you know the "why" of customer success, it's time to learn the "how."

Human-First Business

At the beginning of this blog, I said, "disruption needs disrupting." The word "disruption" has become synonymous with a software solution to a human problem. And humans do have weaknesses that technology can fix or make stronger. Most of us aren't great drivers. A lot of us could use some help with dating. The Hyperloop looks awesome.

But business? Business is about people. As you complete your digital transformation, your business will still be about people—only even more so. Your customers are your scarcest resource, and they are composed entirely of humans!

If there's one mission-critical axiom for digital transformation, it's this: make customer success a cornerstone of your go-to-market plan.

But if there's two, then: never forget that underneath the data points and predictive algorithms are human beings—and that's what's at the heart of customer success.

Leverage Technology to Drive Customer Engagement

Existing customers are cheaper to market to than new ones and spend significantly more on products and services. Yet they often fall by the wayside as companies look to add new logos. Understand how to build deeper relationships with your existing customers and drive brand loyalty by leveraging new technologies in analytics and digital experience.

Read “Engage Existing Customers: Four Key Strategies”   Christine Reyes 2018-06-07T20:50:54Z
Categories: CMS, ECM

Why we need a new liferay-npm-bundler (2 of 3)

Liferay - Thu, 06/07/2018 - 02:12
How bundler 2.x fixes the problems we found in bundler 1.x

This is the second in a three-article series motivating and explaining the enhancements we have made to Liferay's npm bundler. You can read the first article to find out the motivation for the actions explained in this one.

To solve the problems we saw in the previous article, we followed this process:

  1. Go one step back in deduplication and behave like standard bundlers (webpack, Browserify, etc.).
  2. Fix peer dependencies support.
  3. Provide a way to deduplicate modules.

Step 1 gets us to a point where we have repeatable builds that mimic what the developer has in their node_modules folder when the projects are deployed to the server and run in the browser. We implement it by namespacing packages so that the dependencies of each project are isolated from one another.

With that done, we get step 2 nearly for free, because we just need to inject virtual dependencies into package.json files when peer dependencies are used. But this time we know which version constraints to use, because the project's whole dependency tree is isolated from other projects. That is, we have something similar to a node_modules folder available in the browser for each project.

Finally, because steps 1 and 2 cost us deduplication, leaving a solution equivalent to standard bundlers, we define a way to deduplicate packages manually. This new form of deduplication is not automatic, but it gives full control (at build time) over how each package is resolved.

Let's see the steps in more detail...

Isolation of dependencies

To achieve the isolation, the new bundler simply prefixes each package name with the name of the project and rewrites all the references in the JS files. For example, say you have our old beloved my-portlet project:

 package.json
    {
        "name": "my-portlet",
        "version": "1.0.0",
        "dependencies": {
            "isarray": "^1.0.0"
        }
    }
 node_modules/isarray
     package.json
        {
            "name": "isarray",
            "version": "1.0.1",
            "main": "index.js"
        }
 META-INF/resources
     view.jsp
        <aui:script require="my-portlet@1.0.0/js/main"/>
     js
         main.js
            require('isarray', function(isarray) {
                console.log(isarray([]));
            });

When we bundle it with bundler 1.x we get something like:

 META-INF/resources
     package.json
        {
            "name": "my-portlet",
            "version": "1.0.0",
            "dependencies": {
                "isarray": "^1.0.0"
            }
        }
     view.jsp
        <aui:script require="my-portlet@1.0.0/js/main"/>
     js
         main.js
            require('isarray', function(isarray) {
                console.log(isarray([]));
            });
     node_modules/isarray
         package.json
            {
                "name": "isarray",
                "version": "1.0.1",
                "main": "index.js"
            }

But if we use bundler 2.x it changes to:

 META-INF/resources
     package.json
        {
            "name": "my-portlet",
            "version": "1.0.0",
            "dependencies": {
                "my-portlet$isarray": "^1.0.0"
            }
        }
     view.jsp
        <aui:script require="my-portlet@1.0.0/js/main"/>
     js
         main.js
            require('my-portlet$isarray', function(isarray) {
                console.log(isarray([]));
            });
     node_modules/my-portlet$isarray
         package.json
            {
                "name": "my-portlet$isarray",
                "version": "1.0.1",
                "main": "index.js"
            }

As you can see, we just needed to prepend my-portlet$ to each dependency package name; that way, each project loads its own dependencies and won't collide with any other project. Easy.
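The namespacing step can be sketched with two small helpers. These are hypothetical functions for illustration only: the real bundler rewrites require() calls on the Babel AST rather than with a regex.

```javascript
// Sketch: prefix every bare package name with "<project>$" in the
// package.json dependencies and in require() calls. Relative paths
// like './shim' are left untouched. Illustrative, not the bundler's API.
function namespaceDependencies(projectName, pkgJson) {
    var namespaced = {};
    Object.keys(pkgJson.dependencies || {}).forEach(function (dep) {
        namespaced[projectName + '$' + dep] = pkgJson.dependencies[dep];
    });
    return Object.assign({}, pkgJson, { dependencies: namespaced });
}

function namespaceRequires(projectName, source) {
    // Match require('pkg') for bare names only (first char is not . or /)
    return source.replace(
        /require\('([^'./][^']*)'/g,
        "require('" + projectName + "$$$1'"
    );
}

var pkg = namespaceDependencies('my-portlet', {
    name: 'my-portlet',
    version: '1.0.0',
    dependencies: { isarray: '^1.0.0' }
});
console.log(pkg.dependencies);
// { 'my-portlet$isarray': '^1.0.0' }

console.log(namespaceRequires('my-portlet', "require('isarray')"));
// require('my-portlet$isarray')
```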

If we now repeated the test of deploying my-portlet and his-portlet, each one would get its own versions, simply because we have two different isarray packages: one called my-portlet$isarray and another called his-portlet$isarray.

Peer dependency support

Because we have isolated dependencies per portlet, we can now honor peer dependencies perfectly. For example, remember the Diverted peer dependencies section in the previous article: with bundler 1.x, there was only one a-library package available for everybody. But with the new namespacing technique, we have two a-library packages: my-portlet$a-library and his-portlet$a-library.

Thus, we can resolve peer dependencies exactly as stated in both projects because their names are prefixed with the project's name:

my-portlet@1.0.0
  ➥ my-portlet$a-library 1.0.0
  ➥ my-portlet$a-helper 1.0.0

his-portlet@1.0.0
  ➥ his-portlet$a-library 1.0.0
  ➥ his-portlet$a-helper 1.2.0

And in this case, my-portlet$a-library will depend on a-helper at version 1.0.0 (which is namespaced as my-portlet$a-helper) and his-portlet$a-library will depend on a-helper at version 1.2.0 (which is namespaced as his-portlet$a-helper).

How does all this magic happen? Easy: we have just created a new bundler plugin named liferay-npm-bundler-plugin-inject-peer-dependencies that scans all JS modules for require calls and injects a virtual dependency in the package.json file when a module from an undeclared package is required.
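The scanning step can be sketched roughly like this (a simplified illustration of what the plugin does, not its real source; the function name and regex are assumptions):

```javascript
// Find packages that are require()d in a module's source but missing from
// the declared "dependencies" map, so a virtual dependency can be injected.
function findUndeclaredPackages(source, declaredDeps) {
  const required = new Set();
  // Matches require('pkg') / require("pkg"), skipping relative paths (./, ../)
  const re = /require\(\s*['"]([^'"./][^'"]*)['"]/g;
  let m;
  while ((m = re.exec(source)) !== null) {
    // Keep only the package part ('isobject/index' -> 'isobject')
    required.add(m[1].split('/')[0]);
  }
  return [...required].filter((pkg) => !(pkg in declaredDeps));
}

const source = `
  var isarray = require('isarray');
  var isobject = require('isobject');
`;
console.log(findUndeclaredPackages(source, { isarray: '^1.0.0' }));
// → [ 'isobject' ]
```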

So, for example, let's say you have the following project:

 META-INF/resources
     package.json
        {
            "name": "my-portlet",
            "version": "1.0.0",
            "dependencies": {
                "isarray": "^1.0.0"
            }
        }
     js
         main.js
            require(['isarray', 'isobject'], function(isarray, isobject) {
                console.log(isarray([]));
                console.log(isobject([]));
            });
     node_modules
         isarray
             package.json
                {
                    "name": "isarray",
                    "version": "1.0.1",
                    "main": "index.js"
                }
         isobject
             package.json
                {
                    "name": "isobject",
                    "version": "1.1.0",
                    "main": "index.js"
                }

As you can see, there's no dependency on isobject in the package.json file.

However, if we run the project through the bundler configured with the inject-peer-dependencies plugin, it will find out that main.js is requiring isobject and will resolve it to version 1.1.0 which can be found in the node_modules folder.

After that, the plugin will inject a new dependency in the output package.json so that it looks like this:

{
    "name": "my-portlet",
    "version": "1.0.0",
    "dependencies": {
        "isarray": "^1.0.0",
        "isobject": "1.1.0"
    }
}

Note how, because it is an injected dependency, isobject's version constraint is its specific version number, without a caret or any other semantic version operator. This makes sense: we want to honor the exact peer dependency found in the project, so we cannot inject a more relaxed semantic version expression, because it could lead to unstable results.

Also keep in mind that these transformations are made in the output files (the ones in your build directory), not on your original source files.
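The injection step itself can be sketched as follows (again an assumed illustration, not the plugin's real code): pin each undeclared package at the exact version found in node_modules.

```javascript
// Inject undeclared-but-required packages into the output package.json,
// pinned at the exact version found in node_modules (no caret operator).
function injectPeerDependencies(pkgJson, undeclared, nodeModulesVersions) {
  const out = { ...pkgJson, dependencies: { ...pkgJson.dependencies } };
  for (const pkg of undeclared) {
    out.dependencies[pkg] = nodeModulesVersions[pkg]; // e.g. "1.1.0", exact
  }
  return out;
}

const result = injectPeerDependencies(
  { name: 'my-portlet', version: '1.0.0', dependencies: { isarray: '^1.0.0' } },
  ['isobject'],
  { isobject: '1.1.0' }
);
console.log(result.dependencies); // → { isarray: '^1.0.0', isobject: '1.1.0' }
```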

Deduplication of packages (imports)

As we said before, the problem with namespacing is that each portlet gets its own dependencies and we don't deduplicate any more. If we always used the bundler this way, it wouldn't make much sense, because we could obtain the same functionality with standard bundlers like webpack or Browserify and wouldn't need to rely on a specific tool like liferay-npm-bundler.

But, because Liferay is a portlet-based architecture, it would be quite useful if we could share dependencies among different portlets. That way, if a page is composed of five portlets using jQuery, only one copy of jQuery would be loaded by the JS interpreter and used by the five different portlets.

With bundler 1.x that deduplication was made automagically, but we had no control over it. However, with version 2.x, we may now import packages from an external OSGi bundle, instead of using our own. That way, we can put shared dependencies in one project, and reference them from the rest.

Let's see an example: imagine that you have three portlets that use our favorite Non Existing Wonderful UI Components framework (WUI). Suppose this quite limited framework is composed of 3 packages:

  1. component-core
  2. button
  3. textfield

Now, say that we have these three portlets:

  1. my-toolbar
  2. my-menu
  3. my-content

We use these three portlets to compose the home page of our site, and all three depend on the WUI framework.

If we just use the bundler to create three OSGi bundles, each one will package a copy of WUI inside it and use it when the page is rendered, thus leading to a page where your browser loads three different copies of WUI in the JS interpreter.

To avoid that, we will create a fourth bundle where WUI is packaged, and import the WUI packages from the previous three bundles. This leads to a structure like the following:

 my-toolbar
     .npmbundlerrc
        {
            "config": {
                "imports": {
                    "wui-provider": {
                        "component-core": "^1.0.0",
                        "button": "^1.0.0",
                        "textfield": "^1.0.0"
                    }
                }
            }
        }
 my-menu
     .npmbundlerrc
        {
            "config": {
                "imports": {
                    "wui-provider": {
                        "component-core": "^1.0.0",
                        "button": "^1.0.0",
                        "textfield": "^1.0.0"
                    }
                }
            }
        }
 my-content
     .npmbundlerrc
        {
            "config": {
                "imports": {
                    "wui-provider": {
                        "component-core": "^1.0.0",
                        "button": "^1.0.0",
                        "textfield": "^1.0.0"
                    }
                }
            }
        }
 wui-provider
     package.json
        {
            "name": "wui-provider",
            "dependencies": {
                "component-core": "^1.0.0",
                "button": "^1.0.0",
                "textfield": "^1.0.0"
            }
        }

As you can see, the three portlets declare the WUI imports in their .npmbundlerrc files. The imports would probably also be declared in the bundles' package.json files, though they can be omitted and everything will still work at runtime, because the packages are imported.

So, how does this work? The key concept behind imports is that we switch the namespace of certain packages thus pointing them to an external bundle.

So, say that you have the following code in my-toolbar portlet:

var Button = require('button');

Without any import configuration, the bundler would transform it to:

var Button = require('my-toolbar$button');

But, because we are saying that button is imported from wui-provider, it will be changed to:

var Button = require('wui-provider$button');

And also, a dependency on wui-provider$button at version ^1.0.0 will be introduced in my-toolbar's package.json file so that the loader may look for the correct version.

And that's enough, because once we require wui-provider$button at runtime, we will jump to wui-provider's context and load the subdependencies from there on, even if we are executing code from my-toolbar.

If you give it a thought, that's logical, because wui-provider's modules are namespaced too, and once you load a module from it, it will keep requiring wui-provider$-prefixed modules all the way down.
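The namespace-switching decision can be modeled with a short sketch (a hedged illustration under assumed data structures, not bundler source code): packages listed under an import provider get that provider's namespace instead of the project's own.

```javascript
// Decide the namespaced module name: imported packages point at the
// provider bundle, everything else stays in the project's own namespace.
function namespaceModuleName(moduleName, project, imports) {
  for (const [provider, pkgs] of Object.entries(imports)) {
    if (moduleName in pkgs) {
      return `${provider}$${moduleName}`;
    }
  }
  return `${project}$${moduleName}`;
}

const imports = {
  'wui-provider': {
    'component-core': '^1.0.0',
    button: '^1.0.0',
    textfield: '^1.0.0',
  },
};

console.log(namespaceModuleName('button', 'my-toolbar', imports)); // → wui-provider$button
console.log(namespaceModuleName('lodash', 'my-toolbar', imports)); // → my-toolbar$lodash
```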

So, that's pretty much the motivation and implementation of bundler 2.x. Hopefully it sheds some light on why we needed these changes and how we are now building the npm SDK on much more stable foundations.

You can now read the last article of the series analyzing a real life example of using bundler 2.0 within an Angular portlet.

Ivan Zaera 2018-06-07T07:12:39Z
Categories: CMS, ECM

Transforming Public Services Through Customer Experience

Liferay - Wed, 06/06/2018 - 15:59

Digital services are a way to demonstrate that public agencies can provide quality service while opening paths for future interactions with citizens. Only 46% of Brazilian federal agencies and 26% of state agencies offer online scheduling of appointments, assistance and services through their websites, for example. An online presence should bring people closer to all the services and information available in this sphere.

How Technology Transforms and Connects Customer Experiences

Technology is bringing Government and Citizen ever closer together, especially online. With that come a number of concerns about providing better service to the public and to internal users, and about keeping the systems inside institutions running. Currently, fewer than half of Brazilian federal and state agencies, and fewer than 25% of city governments, have responsive portals. The Brazilian government is already taking steps to offer a better, more relevant customer experience. But how can it provide a better experience to the ever more connected citizen in a practical, fast way?

Here are three ways technology can transform the way government delivers experiences:

1. Automation and Self-Service

First of all, a quality customer experience gives citizens the freedom to guide their own journey. Instead of calling or physically visiting an agency, a powerful digital experience platform gives citizens the autonomy to find information, make payments and interact with the customer service team in a way that is more efficient for both parties.

2. Process Transformation

Adopting a digital strategy also reaches old processes. With a complete CMS supporting their digital experience platform, public institutions can abandon the many activities and records kept only on paper. Silos between departments can also be overcome by improving document management, calendars, workflows and collaboration tools.

3. Cost Reduction

Automation, self-service and paperless offices are excellent ways to save time and money. Automating the customer journey means reallocating your team to other activities and reducing the investments made in data management.

The Technology Capabilities You Need

We know why public services and government institutions need to adapt to the new digital reality, but that leads to a question: what kind of technology can make all the benefits above possible? To be considered a digital experience platform (DXP), your platform must include a complete content management system (CMS), document management and workflows, as well as features for mobile application development. But those are just the starting points. For the true transformation of traditional government agencies into Government as a Service (GaaS) hubs, here are three important technology attributes to consider when choosing your next digital experience platform:

1. Omnichannel Support

The ideal digital experience platform is capable of providing omnichannel communication and transactions. This means your customers can access government portals from desktops, notebooks, smartphones, tablets or any other device.

2. Personalization Features

When customers actually interact with government systems, no matter the channel, they need to be served in a personalized way. To make this possible, the DXP must store each customer's name, details and preferences in order to deliver a satisfying customer journey that is personalized to just the right degree.

3. Integration

To truly connect your audience's experiences, the DXP technology must integrate with legacy systems and with any new channels and applications that appear in the coming years. A microservices architecture helps make this possible, in addition to protecting you from technologies that may fall short in the future. Essentially, a microservices architecture means developing software as a set of small, modular, independently deployable services, where each service runs in its own process and communicates through a lightweight, well-defined mechanism to meet business goals.

Success Story: IplanRio - Carioca Digital

IplanRio is the company responsible for providing Information and Communication Technology (ICT) services to the municipal agencies of Rio de Janeiro. It also serves the needs of a population of more than 6 million inhabitants. Through the Carioca Digital portal, the City of Rio de Janeiro brought state and municipal powers together to cut red tape and modernize services that, until then, were available only at physical agencies.

Among the many services readily available through Carioca Digital are viewing electronic health records, property tax (IPTU) information, the city's cultural calendar and the Nota Fiscal Carioca credit balance, as well as opening service tickets with 1746, the Citizen Service Center. Some Detran/RJ services, previously available only at physical agencies, can also be accessed online through the platform. "The tool gives registered users customized access to information, based on their CPF number and geolocation. It is a way for the city government to be in constant dialogue with citizens," says Fernando Ivo, Advisor at IplanRio. As a result, user satisfaction rose significantly and the number of monthly visits to Carioca Digital data grew by 30%.
 

You can find more information about Carioca Digital here.

The New Era of Government Engagement

To be ready to face an omnichannel future, public institutions must first embrace the reality that the digital strategy is the only strategy, and then act on technology that can bear the weight of this new and rapidly evolving digital era.

   

Want to deliver more powerful, personalized and cost-effective customer experiences to citizens and local businesses? Check out the whitepaper:

Four Strategies to Transform Your Customer Experience.

Luciano Demery 2018-06-06T20:59:26Z
Categories: CMS, ECM

Why we need a new liferay-npm-bundler (1 of 3)

Liferay - Tue, 06/05/2018 - 10:12
What is the problem with bundler 1.x

This is the first of a three-article series motivating and explaining the enhancements we have made to Liferay's npm bundler. You can learn more about it in its first release blog post.

How bundler 1.x works

As you all know, the bundler lets you package your JS files and npm packages inside Liferay OSGi bundles so that they can be used from portlets. The key feature is that you may use a standard npm development workflow and it will work out of the box without any need for complex deployments or setups.

To make its magic, the bundler grabs all your npm dependencies, puts them inside your OSGi bundle and transforms them as needed to be run inside portlets. Among these transformations, one of the most important is converting from CommonJS to AMD, because Liferay uses an AMD compliant loader to manage JS modules.

Once your OSGi bundle is deployed, every time a user visits one of your pages, the loader gets information about all deployed npm packages, resolves your dependency tree and loads all needed JS modules.

For example: say you have a portlet named my-portlet that, once started, loads a JS module called my-portlet/js/main, and that module ultimately depends on the isarray npm package to do its job. That would lead to a project containing these files (among others):

package.json
    {
        "name": "my-portlet",
        "version": "1.0.0",
        "dependencies": {
            "isarray": "^1.0.0"
        }
    }
node_modules/isarray
    package.json
        {
           "name": "isarray",
           "version": "1.0.1",
           "main": "index.js"
        }
META-INF/resources
    view.jsp
        <aui:script require="my-portlet@1.0.0/js/main"/>
    js
        main.js
            require('isarray', function(isarray) {
                console.log(isarray([]));
            });

Whenever you hit my-portlet's view page, the loader looks for the my-portlet@1.0.0/js/main JS module and loads it. That causes main.js to be executed, and when the require call runs (note that it is the AMD require, not the CommonJS one) the loader uses information from the server, which has scanned package.json, to determine the version constraint of the isarray package and find a suitable version among all those deployed. In this case, if only your bundle is deployed to the portal, main.js will get isarray@1.0.1, which is the version bundled inside your OSGi JAR file.

What if we deploy two portlets with shared dependencies

Now imagine that one of your colleagues creates a portlet named his-portlet which is identical to my-portlet, but because he developed it later, it bundles isarray at version 1.2.0 instead of 1.0.1. That would lead to a project containing these files (among others):

package.json
    {
        "name": "his-portlet",
        "version": "1.0.0",
        "dependencies": {
            "isarray": "^1.0.0"
        }
    }
node_modules/isarray
    package.json
        {
            "name": "isarray",
            "version": "1.2.0",
            "main": "index.js"
        }
META-INF/resources
    view.jsp
        <aui:script require="his-portlet@1.0.0/js/main"/>
    js
        main.js
            require('isarray', function(isarray) {
                console.log(isarray([]));
            });

In this case, whenever you hit his-portlet's view page the loader looks as before for the his-portlet@1.0.0/js/main JS module and loads it. Then the require call is executed and the loader finds a suitable version. But now something has changed because we have two versions of isarray deployed in the server:

  • 1.0.1 bundled inside my-portlet
  • 1.2.0 bundled inside his-portlet

So, which one is the loader giving to his-portlet@1.0.0/js/main? As we said, it gives the best suitable version among all deployed. That means the newest version satisfying the semantic version constraints declared in package.json. And, for the case of his-portlet, that is version 1.2.0, because it satisfies the semantic version constraint ^1.0.0.

Looks like everything is working as with my-portlet, doesn't it? Well, not really. Let's look at my-portlet again, now that we have two versions of isarray. In my-portlet's package.json file the semantic version constraint for isarray is ^1.0.0 too, so what will it get?

Of course: version 1.2.0 of isarray. That is because 1.2.0 satisfies ^1.0.0 better than 1.0.1; in fact, it's similar to what npm would do if you reran npm install in my-portlet, as it would find a newer version on http://npmjs.com and update it.

Also, this strategy will lead to deduplication of the isarray package and if both my-portlet and his-portlet are placed in the same page, only one copy of isarray will be loaded in the JS interpreter.
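The "best suitable version" rule can be modeled with a toy resolver (an assumption-laden sketch supporting only caret constraints, not the actual loader code): among all deployed versions, pick the newest one that satisfies the declared constraint.

```javascript
// Toy model of bundler 1.x resolution: newest deployed version that
// satisfies a "^MAJOR.MINOR.PATCH" constraint (same major, >= minimum).
function maxSatisfyingCaret(versions, constraint) {
  const base = constraint.replace('^', '');
  const toNums = (v) => v.split('.').map(Number);
  const min = toNums(base);
  const cmp = (a, b) => a[0] - b[0] || a[1] - b[1] || a[2] - b[2];
  const ok = versions
    .map(toNums)
    .filter((v) => v[0] === min[0] && cmp(v, min) >= 0)
    .sort(cmp);
  return ok.length ? ok[ok.length - 1].join('.') : null;
}

// Two copies of isarray deployed: 1.0.1 (my-portlet) and 1.2.0 (his-portlet).
console.log(maxSatisfyingCaret(['1.0.1', '1.2.0'], '^1.0.0')); // → 1.2.0
```

Both portlets declare ^1.0.0, so both resolve to 1.2.0, which is exactly the deduplication (and the surprise for my-portlet's developer) described above.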

But that's perfect! What's the problem, then?

Although this looks quite nice, it has some drawbacks. One is already visible in the example: the developer of my-portlet was using isarray@1.0.1 in his local development copy when he bundled the portlet, which means all his tests were done with that version. But then, because a colleague deployed another portlet with an updated isarray, his bundle started using a different version which, even if declared semantically compatible, may lead to unexpected behaviors.

Not only that: whether version 1.0.1 or 1.2.0 is loaded for my-portlet is not decided in any way by the developer of my-portlet, and changes depending on what is deployed on the server.

Those drawbacks are very easy to spot, but if we look in depth, we can find two subtler problems that may lead to instabilities and hard-to-diagnose bugs.

Transitive dependencies shake

Because the bundler 1.x solution performs aggressive semantic version resolution, the dependency graph of any project may lead to unexpected results depending on how semantic version constraints are declared. This is especially important for what I call framework packages, as opposed to library packages. This is not a formal definition, but I say framework packages for npm packages that are supposed to call the project's code, while library packages are supposed to be called from the project's code.

When using library packages, a switch of version is not so bad, because it usually leads to using a newer (and thus more stable) version. That's the case of the isarray example above.

But when using frameworks, you usually have a bunch of packages that are supposed to cooperate together and be in specific versions. That, of course, depends on how the framework is structured and may not hold true for all of them, but it is definitely easier to have problems in a dependency graph where some subset of packages are tightly coupled than in one where every package is isolated and doesn't worry too much about the others.

Let's see an example of what I'm referring to: imagine you have a project using the Non Existing Wonderful UI Components framework (let's call it WUI). That framework is composed of 3 packages:

  1. component-core
  2. button
  3. textfield

Packages 2 and 3 depend on 1. And suppose that package 1 has a class called Widget from which Button (in package 2) and TextField (in package 3) extend. This is a usual widget-based UI pattern; you get the idea. Now, let's suppose that Widget has this check somewhere in its code:

Widget.sayHelloIfYouAreAWidget = function(widget) {
    if (widget instanceof Widget) {
        console.log('Hey, I am a widget, that is wonderful!');
    }
};

The function tests whether an object extends Widget by looking at its prototype chain, and prints a message if it does.

Now, say that we have two portlet projects again: my-portlet and his-portlet (not the ones we were using above, but two new portlet projects that use WUI) and their dependencies are set like this:

my-portlet@1.0.0
  ➥ button 1.0.0
  ➥ textfield 1.2.0
  ➥ component-core 1.2.0

his-portlet@1.0.0
  ➥ button 1.5.0
  ➥ textfield 1.5.0
  ➥ component-core 1.5.0

In addition, the dependencies of button and textfield are set like this:

button@1.0.0
  ➥ component-core ^1.0.0
button@1.5.0
  ➥ component-core ^1.0.0
textfield@1.2.0
  ➥ component-core ^1.0.0
textfield@1.5.0
  ➥ component-core ^1.0.0

If the two portlets are created at different times, depending on what is available at http://npmjs.com, you may get the following versions after npm install:

my-portlet@1.0.0
  ➥ button 1.0.0
    ➥ component-core 1.2.0
  ➥ textfield 1.2.0
    ➥ component-core 1.2.0
  ➥ component-core 1.2.0

his-portlet@1.0.0
  ➥ button 1.5.0
    ➥ component-core 1.5.0
  ➥ textfield 1.5.0
    ➥ component-core 1.5.0
  ➥ component-core 1.5.0

This assumes that the latest version of component-core available when npm install was run in my-portlet was 1.2.0, but then it was updated and by the time that his-portlet ran npm install the latest version was 1.5.0.

What happens when we deploy my-portlet and his-portlet?

Because the platform will do aggressive deduplication you will get the following dependency graphs:

my-portlet@1.0.0
  ➥ button 1.0.0
    ➥ component-core 1.5.0 (✨ note that it gets 1.5.0 because `his-portlet` is providing it)
  ➥ textfield 1.2.0
    ➥ component-core 1.5.0 (✨ note that it gets 1.5.0 because `his-portlet` is providing it)
  ➥ component-core 1.2.0 (✨ note that the project gets 1.2.0 because it explicitly asked for it)

his-portlet@1.0.0
  ➥ button 1.5.0
    ➥ component-core 1.5.0
  ➥ textfield 1.5.0
    ➥ component-core 1.5.0
  ➥ component-core 1.5.0

We are almost there. Now imagine that both my-portlet and his-portlet do this:

var btn = new Button();
Widget.sayHelloIfYouAreAWidget(btn);

Will it work as expected in both portlets? As you may have guessed, the answer is no. It will definitely work in his-portlet but in the case of my-portlet the call to Widget.sayHelloIfYouAreAWidget won't print anything because the instanceof check will be testing a Button that extends from Widget at component-core@1.5.0 against Widget at component-core@1.2.0 (because the project code is using that version, not 1.5.0) and thus will fail.
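This failure mode is easy to reproduce in isolation. The sketch below stands in for the scenario above (the class names are illustrative placeholders, not WUI code): two copies of the "same" class, as happens when two versions of component-core are loaded, do not satisfy each other's instanceof checks.

```javascript
// Two distinct class objects playing the role of Widget loaded twice:
class WidgetV12 {}                 // stands in for Widget from component-core@1.2.0
class WidgetV15 {}                 // stands in for Widget from component-core@1.5.0

class Button extends WidgetV15 {}  // button's copy was resolved against 1.5.0

const btn = new Button();
console.log(btn instanceof WidgetV15); // true  → his-portlet's check passes
console.log(btn instanceof WidgetV12); // false → my-portlet's check fails silently
```

Even though both classes came from "component-core", the instanceof check compares prototype chains, not package names, so the duplicated class breaks it.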

I know this is a fairly complicated (and maybe unstable) setup that can ultimately be fixed by tweaking the framework dependencies or changing the code, but it is definitely a possible one. Not only that, but there's no way for a developer to know what is happening until he deploys the portlets, and even if a certain combination of portlets works now, it could fail later when a new portlet is deployed.

On the contrary, in a scenario where the developer was using a standard bundler like webpack or Browserify the final build would be predictable for both portlets and would work as expected, each one loading its own dependencies. The drawback would be that with standard bundlers there's no way to deduplicate and share dependencies between them.

Diverted peer dependencies

Let's see another case where bundler 1.x cannot satisfy the build expectations. This time it involves peer dependencies. We will again use two projects named my-portlet and his-portlet with the following dependencies:

my-portlet@1.0.0
  ➥ a-library 1.0.0
  ➥ a-helper 1.0.0

his-portlet@1.0.0
  ➥ a-library 1.0.0
  ➥ a-helper 1.2.0

At the same time, we know that a-library has a peer dependency on a-helper ^1.0.0. That is:

a-library@1.0.0 ➥ [peer] a-helper ^1.0.0

So, in both projects, the peer dependency is satisfied, as both a-helper 1.0.0 (in my-portlet) and 1.2.0 (in his-portlet) satisfy a-library's semantic version constraint ^1.0.0.

But now, what happens when we deploy both portlets to the server? Because the platform aggressively deduplicates, there will only be one a-library package in the system, making it impossible for it to depend on a-helper 1.0.0 and 1.2.0 at the same time. So the most rational decision, probably, is to make it depend on a-helper 1.2.0.

That looks OK for this case as we are satisfying semantic version constraints correctly, but we are again changing the build at runtime without any control on the developer side and that can lead to unexpected results.

However, there's a subtler scenario where the bundler doesn't know how to satisfy peer dependencies: when they appear in a transitive path.

So, for example, say that we have these dependencies:

my-portlet@1.0.0
  ➥ a-library 1.0.0
  ➥ a-sub-helper 2.0.0

a-library@1.0.0
  ➥ [peer] a-helper >=1.0.0

a-helper@1.0.0
  ➥ a-sub-helper 1.0.0

Now suppose that a-library requires a-sub-helper in one of its modules. In this case, when run in Node.js, a-library receives a-sub-helper at version 2.0.0, not 1.0.0. That's because it doesn't matter that a-library peer-depends on a-helper; a-sub-helper is simply resolved from the root of the project, because a-library doesn't declare it as a dependency but just relies on a-helper to provide it.

But this cannot be reproduced in Liferay, because it needs to know the semantic version constraints of each dependency package, as it doesn't have any node_modules folder in which to look up packages. We could fix it by injecting into a-library's package.json an extra dependency on a-sub-helper 2.0.0, but that would work for this project only, not for all projects deployed in the server. That is because, as we saw earlier in this same section, there's only one a-library package for everybody, while at the same time we can have several projects where a-sub-helper resolves to a different version when required from a-library.

In fact, we used this technique for Angular peer dependencies by means of the liferay-npm-bundler-plugin-inject-angular-dependencies plugin, and it used to work if you only deployed one version of Angular. But things became muddier if you deployed several.

For these reasons, we needed a better model on which the whole solution could grow in the future. That need led to bundler 2.x, where we have (hopefully) created a solid foundation for future development of the bundler.

If you liked this, head on to the next article where we explain how bundler 2.x addresses these issues.

Ivan Zaera 2018-06-05T15:12:26Z
Categories: CMS, ECM

Do your own Analytics in Liferay with Elastic Search and Kibana

Liferay - Thu, 05/31/2018 - 18:17

Liferay integrates out of the box with Piwik and Google Analytics.

However, doing analytics with Elasticsearch, Logstash and Kibana is not much harder:


https://youtu.be/os5gqpnC0GA

How to do it?
Easy

First, we need to get the data:

Offloading the work to the users' browsers using Ajax seems the most logical way to do it (imagine hundreds of concurrent users clicking on things and moving their mouse at the same time), so that our server's performance is not affected. That's how Piwik, Google Analytics and Omniture work.

Something similar to this (https://raw.githubusercontent.com/roclas/analytics-storage-server/logstash/javascript_data_collection/hover_and_clicks.js) could do the job.
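A minimal sketch of such a client-side collector, in the spirit of the linked hover_and_clicks.js script (the endpoint URL and event shape here are assumptions, not the script's actual code):

```javascript
// Batch user-interaction events in the browser and ship them to a
// lightweight collector, keeping the application server out of the loop.
const events = [];

function recordEvent(type, target) {
  events.push({ type, target, ts: Date.now() });
}

function flush(endpoint) {
  if (events.length === 0) return null;
  // splice empties the buffer so each event is sent exactly once.
  const payload = JSON.stringify(events.splice(0, events.length));
  // In a real browser this line would send it, e.g.:
  //   navigator.sendBeacon(endpoint, payload);
  return payload;
}

recordEvent('click', '#signup-button');
recordEvent('hover', '.nav-menu');
const sent = flush('http://collector.example.com/events');
console.log(JSON.parse(sent).length); // → 2
```

Wiring recordEvent to document click/mouseover listeners and calling flush on a timer (or on page unload) completes the picture.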

It would also make sense not to burden our Liferay server with receiving these Ajax requests. Our application server is a bit heavy, and something smaller (and more scalable) would do a better job at this simple task.


What about a pool of threads that work asynchronously?

This project (https://github.com/roclas/analytics-storage-server/tree/filesystem) is basically that: a pool of asynchronous threads acting as an HTTP server. It is scalable and fast. It can receive all our Ajax events and store them in files (so that they can later be analyzed in batch using machine learning, big data tools, etc.).

 

What about visualization? Where is the data analysis here?

 

This is where the second part of our problem starts: we are able to collect the data, but now we have to analyze it and show graphs and pie charts.

In this other branch (https://github.com/roclas/analytics-storage-server/tree/logstash), the server pipes all the events into Logstash, which inserts them into Elasticsearch.


Once our data is in Elasticsearch, we just have to point Kibana at our index and start creating dashboards and playing with charts.

If you are interested in the details, this video shows how everything works in more depth (it is also probably a bit too long and boring): https://youtu.be/NMPWR2vdnio

 

Carlos Hernandez 2018-05-31T23:17:02Z
Categories: CMS, ECM

Liferay Portal 7.1 Beta 2 Release

Liferay - Tue, 05/29/2018 - 11:58
I'm pleased to announce the immediate availability of: Liferay Portal 7.1 Beta 2
 
  Download Now: Fixed in Beta 2

New Features Summary

Modern Site Building: Liferay 7.1 introduces a new way of adding content. Fragments allow a content author to create content in small reusable pieces. Fragments can be edited in real time or can be exported and managed with the tooling of your choice. Use page templates from within a site and have complete control over the layout of your content pages. Navigation menus now give you complete control over site navigation. Create site navigation in new and interesting ways and have full control over the navigation's visual presentation.

Forms Experience: Liferay 7.1 includes a completely revamped forms experience. Forms can now have complex grid layouts, numeric fields and file uploads. They now include new personalization rules that let you customize the default behavior of the form. Using the new Element Sets, form creators can now create groups of reusable components. Form fields can now be translated into any language using any Liferay locale and can also be easily duplicated.

Redesigned System Settings: System Settings has received a complete overhaul. Configurations have been logically grouped together, making it easier than ever before to find what's configurable. Several options that were located in Server Administration have also been moved to System Settings.

User Administration: The user account form has been completely redesigned. Each form section can now be saved independently of the others, minimizing the chance of losing changes. The new ScreensNavigationEntry lets developers add any form they want to user administration.

Improvements to Blogs and Forums: Blog readers can now unsubscribe from notifications via email. Friendly URLs used to be generated based on the entry's title; authors now have complete control over the friendly URL of the entry. Estimated reading time can be enabled in System Settings and will be calculated based on the time taken to write an entry. Blogs also have a new cards ADT that can be selected from the application configuration. Videos from popular services such as YouTube, Vimeo, Facebook Video and Twitch can now be added inline while writing a new entry. Message Boards users can now attach as many files as they want by dragging and dropping them into a post. Message Boards has also received many visual updates.

Workflow Improvements: Workflow has received a complete UI overhaul. All workflow configuration is now consolidated under one area in the Control Panel. Workflow definitions are now versioned, and previous versions can be restored. Workflow definitions can now be saved in draft form and published live when they are ready.

Infrastructure: Many improvements have been incorporated at the core platform level, including Elasticsearch 6.0 and the inclusion of Tomcat 9.0. At the time of this release, JDK 8 is still the only supported JDK.

Documentation

Documentation for Liferay 7.1 is well underway. Many sections have already been completed in the Deployment and Development sections. For information on upgrading to 7.1, see the Upgrade Guide.

Jamie Sammons 2018-05-29T16:58:10Z
Categories: CMS, ECM

Why Are Chatbots Essential for Innovation in Banking?

Liferay - Mon, 05/28/2018 - 06:23

The recent popularity of chatbots is quite understandable, considering that we have been talking to machines regularly for some time now, ever since Siri debuted in 2011. However, while almost every smartphone user has tried talking to a voice assistant at some point, many still feel embarrassed or uncomfortable doing so in public. A 2016 Business Insider study found that the four leading messaging apps had surpassed the leading social networks in active users. This shift is tied to chat's ability to avoid the friction inherent in voice recognition: a message or search can be entered and interpreted faster and with fewer errors. The benefit of chat is that it can be intelligent while remaining personal, and it gets easier every day; most people also feel more comfortable typing than speaking. Moreover, if every time Siri said it didn't understand you were instead given a direct, useful answer in a chat, you would surely be more satisfied.

This shift in trends indicates that for many people chatting has become the preferred form of communication, which suggests that consumers will be just as happy to talk with businesses and banks the same way they talk with friends and family: through chat. According to Gartner, the key aspect of conversational commerce is that it "allows users to converse through the platform of their choice and therefore takes transparency to the next level." Financial services will benefit not only from an inexpensive customer support tool, but also from the data that tool collects.

A Changing Landscape: Virtual Assistant or Data Scientist?

A chatbot is a conversational algorithm you interact with through a chat interface. In the past, chatbots functioned mainly as information proxies or virtual assistants following rule-based logic. Now that several major messaging platforms have opened their APIs to third-party developers, complete transactions can be carried out through chat. For example, you can shop at H&M or Sephora through Kik, or order food from Taco Bell via Slack. Even FinTechs are now generating revenue through these technologies. Pure plays (publicly traded companies dedicated to a single industry or product) such as Digit, Plum and Cleo let you save, budget and transfer money through a friendly, ongoing relationship with a chatbot directly in Facebook Messenger. As the possibilities grow, so does the data. Because chatbots are built with artificial intelligence via natural language processing and machine learning, they are optimized to store data. After all, they are data themselves.

The difference between rule-based chatbots and intelligent chatbots is the data behind them. Many of the chatbots built in recent years were limited to working vertically; to get a correct answer from the bot, you also had to ask the right questions. Until recently, data could only capture behavior through clicks, screenshots, time on page and shopping cart actions. Generating a more human conversation with the customer offers a far more dynamic way to understand their needs. A close relationship with the customer means a better chance of understanding their intentions and emotions.

Chatbots and Data Collection

While chatbots are relatively new, their benefits are hard to ignore. A chatbot strategy aimed at making the most of the data a bank already holds has many benefits: chatbots reduce costs by deflecting and qualifying customer inquiries, increase sales through personalized offers, build brand loyalty across every channel with a consistent voice, and add value by teaching financial literacy. In many ways, a chatbot is a positive feedback loop. By using data to segment and target its audience, a bot can engage those users and collect more data. As the chatbot gets smarter, it improves, and so does the data it collects. The cycle repeats and improves.

Chatbots integrated into a customer service strategy work well at scale and help contain costs. According to a 2016 Forrester report, 73% of people say that time is the most important thing a company can invest in to deliver good customer service. However, although chatbots will help reduce traffic in customer service centers, they cannot replace them entirely. The most experienced banks use chat assistants as an additional customer service tool, not as a replacement for their support staff. Instead of waiting hours or days for an administrator to answer a question by email, or enduring the hassle of a traditional call center, a chatbot can quickly connect you to a human agent through the same interface if it cannot answer your question or request. This makes the experience faster and more convenient.

The Chatbot Challenge and Opportunity in Financial Services

Initially, the main chatbot use case was customer service. [24]7, an artificial intelligence development company, estimates that chatbots should reduce call center volume by 35% and email traffic by up to 50%. But well beyond cutting costs, chatbots make service feel more personal. The fact that these bots can identify and verify users automatically puts them a step ahead of traditional customer service. Beyond simply answering questions, chatbots can initiate conversations. Forrester recently argued in a report that banks must bring this kind of value into customers' daily lives to avoid commoditization.

According to the report, "companies will deliver personalized, contextually relevant interactions by combining customer profiles, historical data about what they have done and current data about what is happening in their lives in the moment. When digital banking leaders get personalized actions right, they will be able to deliver digital services that are as simple as each individual customer needs them to be."

Since chatbots can learn, the challenge for banks is precisely to teach them to act differently with each user. A small business owner may need help completing long applications, signing forms and keeping track of financial documents, whereas a retiree will need help transferring funds between portfolios and accounts. By responding directly to a user's needs through a conversational interface, financial services can personalize the experience and, in turn, use that data to develop better products and services.

Chatbots are a tool that can help banks become more personal, and many large banks have already deployed them, including Wells Fargo, Santander, Bankia, MasterCard and others. In the fall of 2016, Bank of America introduced Erica, a chatbot that responds by both text and voice. Erica can also talk to you: by studying your spending, Erica can tell you how to pay off your Visa bill faster, or help you spot opportunities to refinance a loan. Another example is the new Western Union and Facebook Messenger chatbot, which lets you easily send money anywhere in the world from a card or an account without leaving Messenger.

Chatbots and Their Results

As chatbots move from the help desk to the interface, their value grows in both revenue and data-collection capacity. Most mobile users spend their time in a handful of apps they find valuable; according to a Personetics report, about 25% of downloaded apps are abandoned after a single use. Chatbots make it possible to identify why users engage, how much they know about the available products, where they get frustrated or abandon transactions, and whether they are satisfied at the end of their experience. Knowing this will let banks fine-tune the customer experience to create what Forrester calls "personal value ecosystems": a set of digitally connected products and services that individuals use to satisfy their needs and wants.

Chatbots make life easier. Whether you need to ask your bank a question or move money from one account to another, chatbots can handle the request quickly and easily. More importantly, chatbots are a path to personalization. As Forrester notes, one in three people believe all banks are the same. With or without chatbots, the challenge for banks now is to change that opinion in order to avoid disintermediation by FinTechs and other marketplaces. The result will be not just convenience, but customers who recognize that their banks know who they are and what they want. Better still, not just customers, but happier customers.

Take a Leap in Banking Innovation

For more information on digital strategy in financial services, and to explore the role of technology in digital transformation and omnichannel engagement, read the whitepaper:

Omnicanalidad: Mucho Más Que Una Palabra De Moda Para La Banca   Maria Sanchez 2018-05-28T11:23:10Z
Categories: CMS, ECM

How to configure Liferay Developer Studio 2.2.x with Java 7 and LDS 3.x with Java 8 for Mac OS 10.13 (High Sierra)

Liferay - Thu, 05/24/2018 - 11:09
Scenario
  • You are hosting Liferay Developer Studio with Mac OS 10.13 (High Sierra)
  • You want to use Liferay Developer Studio 2.2.x with Java 7 and Liferay Developer Studio 3.x with Java 8
  • Liferay Developer Studio 2.2.x requires Java 1.7 (7)
  • Liferay Developer Studio 3.x requires Java 1.8 (8)
  • You have Java 1.7 (7) and 1.8 (8) installed on your Mac
  • Java 1.8 (8) is the default Java runtime on your Mac

NOTE: This scenario is also applicable to Mac OS 10.12 (Sierra).


Solution Outline
  • Configure Liferay Developer Studio 2.2.x to use Java 7
    NOTE: Do not configure Liferay Developer Studio 3.x as it will default to using Java 8
  • Configure console to use Java 7 for user profile (optional)
  • Configure Java 7 as the system-wide default for Mac OS (optional)

Configuration
1. Liferay Developer Studio 2.2.x Configuration

This is possibly the simplest and least intrusive approach.


1.1. Locate LDS 2.2.x app using Mac OS Finder

e.g.

/Applications/Liferay-Developer-Studio/Liferay-Developer-Studio-2.2.2-GA3/DeveloperStudio.app


1.2. Show package contents

Right click on file "DeveloperStudio.app" and select menu item "Show package contents"

e.g.

/Applications/Liferay-Developer-Studio/Liferay-Developer-Studio-2.2.2-GA3/DeveloperStudio.app/Contents/MacOS


1.3. Update LDS 2.2 app launch configuration with path to Java 1.7

Edit the Developer Studio configuration file and insert the "-vm" parameter with the path to the Java 1.7 home folder.

e.g.

FILE: /Applications/Liferay-Developer-Studio/Liferay-Developer-Studio-2.2.2-GA3/DeveloperStudio.app/Contents/MacOS/DeveloperStudio.ini

...
-vm
/Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home
...
2. Mac OS 10.13 (High Sierra) User Profile Java Console Configuration (Optional)

This configuration is only required if you plan to use the Liferay SDK from the command line console (aka. terminal).

We will configure the command line console to use Java 7 for the active user.

If you need to use a different Liferay SDK, adjust the target Java version and restart the command line console to apply changes.


2.1. Create or update file $HOME/.bash_profile with the following snippet.

e.g.

if [ -f ~/.bashrc ]; then
  source $HOME/.bashrc
fi
2.2. Create or update file $HOME/.bashrc with the following snippet.

e.g.

export JAVA_HOME=`/usr/libexec/java_home -v 1.7`
export PATH=$JAVA_HOME/bin:$PATH
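If you switch between Liferay SDKs often, the two exports above can be wrapped in a small helper function. This is a sketch, not part of the original article: the function name `setjdk` is ours, and it assumes a Bash shell and the macOS-only `/usr/libexec/java_home` tool.

```shell
# Hypothetical helper for ~/.bashrc: switch the active JDK on demand.
# Relies on the macOS-only /usr/libexec/java_home tool, which maps a
# version spec (e.g. 1.7) to the matching JDK home folder.
setjdk() {
  JAVA_HOME=$(/usr/libexec/java_home -v "$1") || return 1
  export JAVA_HOME
  export PATH="$JAVA_HOME/bin:$PATH"
}

# Usage (macOS):
#   setjdk 1.7   # before working with Liferay Developer Studio 2.2.x SDKs
#   setjdk 1.8   # before working with Liferay Developer Studio 3.x SDKs
```

After sourcing this, new shells pick up whichever JDK was selected last, without editing .bashrc each time.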
2.3. Close and reopen all console (aka. terminal) windows to apply changes.
2.4. Confirm the command line console is using Java version 1.7.

e.g.

$ java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
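If you want to script this confirmation rather than eyeball it, the version string can be parsed out of `java -version`-style output. A minimal sketch, not from the original article; a captured sample line stands in for the live command so the parsing logic is self-contained.

```shell
# Extract the quoted version from `java -version`-style output.
# `sample` is a captured example line; on a real system you would pipe
# `java -version 2>&1` into the same awk command instead.
sample='java version "1.7.0_79"'
ver=$(printf '%s\n' "$sample" | awk -F '"' '{print $2}')
echo "$ver"   # prints 1.7.0_79

# Check just the major.minor part:
case "$ver" in
  1.7.*) echo "Java 7 active" ;;
  *)     echo "unexpected Java version: $ver" ;;
esac
```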

 

3. Mac OS 10.13 (High Sierra) Global Java Configuration (Optional)

This approach may impact other applications or tools that rely on a particular Java runtime, so use it with caution.

There are many articles on this topic of setting the global Java version.

Some approaches using the java_home tool are outlined in the following Stack Overflow article:

To identify all registered Java runtime environments on your Mac, use the /usr/libexec/java_home tool.

e.g.

$ /usr/libexec/java_home --verbose
Matching Java Virtual Machines (16):
    1.8.0_171, x86_64: "Java SE 8" /Library/Java/JavaVirtualMachines/jdk1.8.0_171.jdk/Contents/Home
    1.8.0_161, x86_64: "Java SE 8" /Library/Java/JavaVirtualMachines/jdk1.8.0_161.jdk/Contents/Home
    1.8.0_121, x86_64: "Java SE 8" /Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home
    1.8.0_111, x86_64: "Java SE 8" /Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home
    1.8.0_45, x86_64: "Java SE 8" /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home
    1.8.0_25, x86_64: "Java SE 8" /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home
    1.8.0_05, x86_64: "Java SE 8" /Library/Java/JavaVirtualMachines/jdk1.8.0_05.jdk/Contents/Home
    1.7.0_79, x86_64: "Java SE 7" /Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home
    1.7.0_71, x86_64: "Java SE 7" /Library/Java/JavaVirtualMachines/jdk1.7.0_71.jdk/Contents/Home
    1.7.0_55, x86_64: "Java SE 7" /Library/Java/JavaVirtualMachines/jdk1.7.0_55.jdk/Contents/Home
    1.7.0_51, x86_64: "Java SE 7" /Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home
    1.7.0_21, x86_64: "Java SE 7" /Library/Java/JavaVirtualMachines/jdk1.7.0_21.jdk/Contents/Home
    1.6.0_51-b11-457, x86_64: "Java SE 6" /Library/Java/JavaVirtualMachines/1.6.0_51-b11-457.jdk/Contents/Home
    1.6.0_51-b11-457, i386: "Java SE 6" /Library/Java/JavaVirtualMachines/1.6.0_51-b11-457.jdk/Contents/Home
    1.6.0_35-b10-428, x86_64: "Java SE 6" /Library/Java/JavaVirtualMachines/1.6.0_35-b10-428.jdk/Contents/Home
    1.6.0_35-b10-428, i386: "Java SE 6" /Library/Java/JavaVirtualMachines/1.6.0_35-b10-428.jdk/Contents/Home
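To pull just the Java 7 home folders out of a listing like that (for example, to paste into the DeveloperStudio.ini "-vm" entry), the output can be filtered with awk. A sketch, not from the original article; a two-line sample stands in for the real `--verbose` output so it runs anywhere.

```shell
# Filter java_home --verbose style output down to Java SE 7 home folders.
# `sample` replaces the live command output; on macOS you would run:
#   /usr/libexec/java_home --verbose 2>&1 | awk '/Java SE 7/ {print $NF}'
sample='1.8.0_171, x86_64: "Java SE 8" /Library/Java/JavaVirtualMachines/jdk1.8.0_171.jdk/Contents/Home
1.7.0_79, x86_64: "Java SE 7" /Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home'

# Print the last field (the home path) of every "Java SE 7" line.
printf '%s\n' "$sample" | awk '/Java SE 7/ {print $NF}'
```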
Tim Telcik 2018-05-24T16:09:34Z
Categories: CMS, ECM

New Liferay IntelliJ Plugin Released

Liferay - Tue, 05/22/2018 - 22:45

Hello all,


Today we are pleased to announce the official release of the Liferay IntelliJ Plugin, a plugin for JetBrains IntelliJ IDEA that supports developing Liferay components.


Customers can download the Liferay IntelliJ Plugin here. You may refer to the following installation steps:

  • Click on Configure > Plugins > Install plugin from disk...
  • Point to the downloaded zip file > Click on OK button > Restart

The key features for this release are:

 

  • Creating Liferay Workspaces (Maven and Gradle based)
  • Creating Liferay Modules (Maven and Gradle based)
  • Liferay Tomcat Server support for deployment and debugging
  • Line markers for each entity in the service editor
  • Editing support for bnd.bnd files and XML
    • Support for syntax checking, highlighting and hyperlinks
    • Support for auto code completion

 

Special Thanks

Thanks so much to Dominik Marks for his work on the code completion features.

 

Liferay Workspace Support
 

To create a Liferay workspace, click File > New > Project..., select Liferay and choose your Liferay workspace type.

 

Installing a Liferay Server

The Liferay server is located in the bundles folder under your Liferay workspace, as defined in the gradle.properties file.

 

Creating a Liferay Module and Deploy

Creating Liferay modules requires an existing Liferay workspace. Once you have created a new module project, select Liferay > Deploy.

 

Add line markers for each entity in service editor

 

Editing bnd.bnd and XML files (highlighting and code completion in the editor)

Here are some captured screenshots about syntax highlighting, code completion and hyperlink.




 

Feedback

If you run into any issues or have any suggestions please come find us on our community forums or report them on JIRA (IDE project), we are always around to try to help you out. Good luck!

Yanan Yuan 2018-05-23T03:45:16Z
Categories: CMS, ECM

Liferay Portal 7.0 CE GA7 Release

Liferay - Tue, 05/22/2018 - 14:11

I'm pleased to announce the immediate availability of: Liferay Portal 7.0 CE GA7!


Download Now!

What’s New

Elasticsearch 6.0 - Liferay Portal 7.0 CE GA7 now adds support for Elasticsearch 6.x. Download the Liferay CE Connector to Elasticsearch 6 from Marketplace.

Liferay Faces Fixes - The following fixes for Liferay Faces are included:

Bug Fixes - A complete list can be found here.

Release Nomenclature

Following Liferay's version scheme established in 2010, this release is Liferay Portal 7.0 CE GA7.  The internal version number is 7.0.6 (i.e. the seventh release of 7.0).  See below for upgrade instructions from 6.1, 6.0, and 5.x.

Downloads

You can find the 7.0 release on the usual downloads page. 

Source Code

As Liferay is an open source project, many of you will want to get at its guts. The source is available as a zip archive on the downloads page, or on its home on GitHub. Many community contributions went into this release, and hopefully many more in future releases! If you're interested in contributing, take a look at our updated contribution guide.

Compatibility Matrix

Liferay Portal 7.0 CE GA7 is tested extensively against different open source app server and database server combinations.

Application Servers:
  • Apache Tomcat 8.0 with Java 8
  • Wildfly 10.0 with Java 8
Database Servers:
  • HSQLDB 2 (only for demonstration, development, and testing)
  • MySQL 5.6
  • MariaDB 10
  • PostgreSQL 9.4
Search:
  • Elasticsearch 2.4.x
Documentation

The Liferay Documentation Team has been hard at work updating all of the documentation for the new release. This includes updated (and vastly improved and enlarged) Javadoc and related reference documentation; updated installation and development documentation can be found on the Liferay Developer Network. Our community has been instrumental in identifying areas of improvement, and we are constantly updating the documentation to fill in any gaps.

Bug Reporting

If you believe you have encountered a bug in the new release you can report your issue on issues.liferay.com, selecting the "7.0.0 CE 7" release as the value for the "Affects Version/s" field.

Upgrading

The upgrade experience for Liferay 7 has been completely revamped.  There are some caveats though, so be sure to check out the Upgrade Guide on the Liferay Developer Network for more details on upgrading to 7.0.

Getting Support

Support for Liferay Portal 7.0 CE GA7 is provided by our awesome community. Please visit our community website for more details on how you can receive support.

Liferay and its worldwide partner network also provide services, support, training, and consulting around its flagship enterprise offering, Liferay DXP.

Also note that customers on existing releases such as 6.1 and 6.2 continue to be professionally supported, and the documentation, source, and other ancillary data about these releases will remain in place.

Kudos

Thanks to everyone in our community! It is your constant support that makes each release as great as it is!

Jamie Sammons 2018-05-22T19:11:29Z
Categories: CMS, ECM

LSNA 2018 Platinum Sponsor Interview: Alaaeldin El-Nattar, Rivet Logic

Liferay - Mon, 05/21/2018 - 12:22

In this Liferay Symposium North America interview series, we put a spotlight on our event sponsors and the ways in which they are driving digital transformation today. These interviews help put a face to the modern innovations happening at the companies in focus and give a glimpse into their latest innovations.

Alaaeldin El-Nattar is the Chief Operating Officer at Rivet Logic, helping to ensure that the organization consistently provides high-quality services. With more than 18 years of experience in enterprise IT software design and development, Alaaeldin has helped lead the architecture, implementation and deployment of many different client projects at Rivet. Today, Rivet is providing award-winning consulting, design and systems integration services that build modern digital experiences needed by companies across industries.

Liferay: What is your current position with Rivet? Can you share a brief overview of what it is you do in your work with the company?

Alaaeldin El-Nattar: I am Rivet Logic’s COO. I oversee all of the company’s practices to ensure that our clients are receiving the highest level of professional services. I also run our Managed Services operations.

L: What would you say most motivates you and what do you wish to accomplish through your position?

AE: As a company, we are committed to helping organizations excel through our thought leadership and digital experience solutions. We pride ourselves on the quality of work we provide, so it’s extremely rewarding when we see our solutions delivering real business value and making a positive impact.

We’ve also worked with Liferay since the very beginning. To see Liferay come this far and to be part of that journey every step of the way as a Platinum Partner has been a very fulfilling experience. We’ve seen Liferay evolve as a platform, and it’s exciting to see where the future roadmap leads, as well as the types of innovative solutions we can build for customers using Liferay.

L: This year at LSNA, we are focused on the next step of digital innovation. Digital transformation and the continuing advent of new technology is changing businesses around the world. What do you think companies need to do in order to successfully innovate in their technological strategy?

AE: The number of technology platforms available is increasing every day, each addressing different business challenges in its own way. There is no one size fits all solution. The key is to select modern technologies that are agile, with the ability to integrate with various other digital technologies so they work well together. Then, as a whole, businesses can address their unique challenges in the most effective and optimized way possible.

It’s also important to have some sort of feedback loop. Data is all around us, and the ability to collect the right data and extract valuable insights to make continuous improvements is paramount.

But technology is only one part of the equation. Going beyond technology, a business’s overall digital strategy should be driven by the customer, not IT. It’s the right combination of people, process and technology that makes digital transformation successful.

L: As IT and business strategies continue to evolve, how do you predict everyday business will change in the coming years?

AE: The pace of change keeps increasing - businesses either need to keep up and adapt or risk failure. For this rapid change to happen, organizations need to create an environment that engages employees and fosters innovation.

Employees today demand the same type of experience in their workplace as consumers do. They want to be empowered - with easy access to content and data, user-friendly tools to perform their job, tools that facilitate collaboration and the ability to find and engage with other employees across the organization.

And with the workforce becoming more mobile and global, we’re going to see more and more businesses implement a digital workplace. We believe a modern intranet is an essential part of a digital workplace, providing a gateway for employees to access anything company related, while providing an environment that cultivates knowledge sharing and community.

L: Along with changing businesses, today’s customers have new standards regarding their online experiences with companies. How do you see organizations needing to change and adapt to new customer demands in order to stay successful?

AE: Over the past few years, user experience and customer journeys have become key. But many companies have not yet moved over to a unified digital experience. We believe Digital Experience Platforms (DXP) will continue to gain momentum as businesses work toward developing their strategies for omnichannel engagement.

Many businesses are collecting customer data to some extent. However, those who can harness their data to find meaning and in turn make data-driven decisions for better personalization will gain a competitive advantage. But it’s also important to keep in mind that it’s not about data quantity but quality, as well as the ability to make better use of the data you already have and fine-tuning data collection and analysis methods.

Lastly, businesses are going to need to pivot faster and adopt agile product and service development. In order to adjust their customer experience, companies will also need to adjust their products and services in a rapid-fire way. Rather than a traditional, iterative production work-cycle, companies will need to balance a lot of moving parts, constantly testing, improving and optimizing their solutions.

L: What is your favorite thing about Liferay Symposium North America? What should a first-time attendee make sure not to miss?

AE: Whether you’re a developer or business user, Liferay Symposium is a great way for attendees to learn and be inspired through the large variety of sessions offered across multiple tracks. It’s also a great way to connect with the rest of the Liferay community and hear how others are using Liferay to solve unique business challenges in their organizations.

The official after-party is a can’t miss event for first time LSNA attendees, or any attendee for that matter.

L: Besides being a platinum sponsor at LSNA, what else are you up to? What else might our readers be interested in that is happening at Rivet?

AE: Besides LSNA, we’ve been busy developing solutions in areas where we see a gap and that our customers can benefit from using. A few examples are Liferay and Box integration, Wealth Management Portal and Product Development Portal, to name a few.

Join Us at Liferay Symposium North America 2018

Liferay Symposium will take place from October 8-10 this year in New Orleans, LA, and registration is now open. Click the link below and register for three days of insights and support in your digital transformation.

Register for LSNA 2018 Today   Matthew Draper 2018-05-21T17:22:54Z
Categories: CMS, ECM

Understanding the Elements of Digital Experience Platforms

Liferay - Mon, 05/21/2018 - 11:45

Digital Experience Platforms (DXPs) are becoming a major part of many companies’ digital transformation strategies. This software helps companies integrate, support and update their many different systems in order to make the most of their online presence, while creating new solutions to support many audiences, such as customers and employees, on a single platform.

However, due to the relatively new nature of the term and the types of applications that fall under its umbrella, it is common for companies to wonder, what is a DXP? Through modern market needs and a wide variety of vendors responding to such demands, a consistent DXP definition is beginning to take shape today.

In order to make informed choices regarding the future of their technology selection, businesses across all industries should better understand the nature of a Digital Experience Platform. By understanding the common elements of a DXP and how they work together to form a platform, a company can be equipped to make the right decision for their unique needs. According to Gartner, DXPs combine and coordinate applications as a set of rationalized, integrated services that fall into three categories:

1. Audience Experience

A DXP must be able to provide customers, partners and employees with the ability to interact with various capabilities. Whether this is from target audiences navigating through sites, portals and applications on the front-end or employees parsing through information on the back end, providing a comprehensive but easy to use experience is crucial for a DXP. Elements of these audience experiences include:

  • Content Interaction: Audiences who use solutions built on a DXP should have personalized access to important information, services and applications, as well as the potential ability to rate and share the content they have discovered.
  • Search, Navigation and Discovery: Digital experiences built on a DXP should allow audiences to discover the information and services they need thanks to the use of dynamic navigation and search functions that leverage multiple search engines and results based on personalization.
  • Collaboration: DXPs should strengthen internal company communication by aggregating important employee information and allowing for collaboration on documents, calendars, projects and more for better knowledge management.
  • End-User Customization: Audiences using a DXP should be able to manage and personalize their own experiences to some degree. Depending on company regulations, this can include notifications, saved searches, subscriptions, dashboard and website layouts and more.

Together, these elements allow an organization the ability to uniquely tailor all aspects of the digital experience for individual audience members and empower workforces by helping them quickly find the information they need.

2. DXP Management

A DXP enables a business to administer, create and improve many different aspects of their digital experience. By providing greater control over the many elements that make up a company’s online presence, as well as how these many pieces work together, a business can fine-tune customer experiences and adapt to the changing needs of both target audiences and employees. Elements of DXP management include:

  • Content Management: Web content management capabilities allow users to create, organize and publish different types of content for websites, mobile applications, portals and other online solutions so that a company can effectively control its content and assets.
  • Integration and Aggregation: Administrators can aggregate various applications and integrate software with third-party systems for robust services that better leverage the data created by users and collected by the business.
  • Personalization: Adapt online content in websites, portals and more to suit an individual user’s past behavior and preferences, which can be found through analyzing the audience member’s shared data.
  • Analytics and Optimization: Integration of third-party analytics data or creation of analytics solutions within the platform help monitor performance and can be used to improve assets for more effective digital experiences.
  • Security Administration: System security is a crucial element of modern digital business that can be supported by DXP tools including identity management, single sign-on, document access management and more user rights control.
  • Workflow/Business Process Management: A DXP can support the workflow of content approval and publishing, as well as workflows for forms and other business processes for greater control over daily work.
  • User Experience: Business users can manage webpage layout and content in order to control the elements that comprise customer journeys, enabling better targeting within marketing efforts.
  • Digital Commerce: Commerce software can be integrated with or built on a DXP so businesses can manage transactions, shipping orders, shopping baskets and more, should online selling be part of their business strategy.

Because a DXP is composed of so many different elements, this level of control means that users will be able to effectively manage them all, both as individual pieces and as part of an integrated whole.
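The workflow and business process management element described above can be illustrated with a tiny content-approval state machine: a draft moves through review to published, and only the transitions defined up front are legal. The states, actions and `Article` class are illustrative assumptions, not any particular DXP's workflow engine.

```python
# Minimal sketch of a content-approval workflow of the kind a DXP
# manages: only the (state, action) pairs listed here are allowed.

TRANSITIONS = {
    ("draft", "submit"): "in-review",
    ("in-review", "approve"): "published",
    ("in-review", "reject"): "draft",
}

class Article:
    def __init__(self, title):
        self.title = title
        self.state = "draft"

    def apply(self, action):
        nxt = TRANSITIONS.get((self.state, action))
        if nxt is None:
            raise ValueError(f"cannot '{action}' from state '{self.state}'")
        self.state = nxt

a = Article("Quarterly update")
a.apply("submit")
a.apply("approve")
print(a.state)  # published
```

Because illegal moves raise an error instead of silently succeeding, the approval chain stays auditable, which is the point of routing publishing through a workflow at all.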

3. Platform/Architecture

Every DXP has a technical foundation upon which the many applications that compose it are built. By building new tools with a DXP, as well as connecting pre-existing applications through the platform, a business can gain greater control over customer and employee data, as well as over how smoothly a user can shift from one tool to another for a more seamless experience. The architecture of a digital experience platform includes:

  • Presentation: DXPs support UI technologies that deliver rich experiences, including page frameworks, containers, component models and widgets or a similar construct. These elements, along with responsive web design and progressive web application development, help DXP users craft a digital presence that uniquely suits their company.
  • Customer/User Data Management: DXPs can incorporate a user profile as a single trusted view of the "customer" or individual user, which collects, unifies and synchronizes customer data from digital and analog channels to improve customer experiences.
  • Cloud Enablement: Support deployment via third-party infrastructure-as-a-service providers, allowing DXP services to run in a cloud-based environment at a platform level with multitenancy.
  • Mobility: Develop mobile applications, including notification support, offline support, mobile software development kit (SDK), voice interaction and more through a mobile application development platform.
  • Globalization/Localization/Multilanguage Support: A DXP can support multiple character sets, translation and localization, which can be applied automatically to the right users based on their data and history.

Together, these features allow an organization control over how their systems are interconnected and the ways in which information is shared, improving user insights and creating frictionless audience experiences.
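The multilanguage support described above boils down to resolving the best available translation for a user's locale, with a fallback chain when no exact match exists. A minimal sketch, assuming a plain dictionary of translations rather than any real DXP storage:

```python
# Sketch of locale resolution: try the exact locale, then the bare
# language code, then a configured default language.

content = {
    "welcome-banner": {
        "en": "Welcome",
        "pt-BR": "Bem-vindo",
        "de": "Willkommen",
    }
}

DEFAULT_LOCALE = "en"

def localized(article_id, locale):
    translations = content[article_id]
    if locale in translations:
        return translations[locale]
    lang = locale.split("-")[0]  # e.g. "fr-FR" -> "fr"
    return translations.get(lang, translations[DEFAULT_LOCALE])

print(localized("welcome-banner", "pt-BR"))  # Bem-vindo
print(localized("welcome-banner", "fr-FR"))  # Welcome (fallback)
```

The fallback order is the design decision that matters: users with an untranslated locale still get usable content instead of an error.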

Supporting Your Business with a Digital Experience Platform

The ability of a digital experience platform to support a wide variety of needs means it can be leveraged by companies across all industries to meet various goals. In the age of digital transformation, companies will require a system that is equally focused on and able to manage both front-end user experiences and back-end systems, as discussed by ORM London. A strong DXP will give organizations the ability to not only strengthen both sides, but use the data and capabilities of both front- and back-end systems to improve the other.

Learn More About What a DXP Can Do for You

Understanding the elements of a DXP can help you embrace digital transformation in meaningful ways for your organization. Learn more about how a platform can support your company in our whitepaper.

Read “Digital Experience Platforms: Designed for Digital Transformation”   Matthew Draper 2018-05-21T16:45:20Z
Categories: CMS, ECM

How Does Digital Transformation Counteract Slowdown in the Insurance Industry?

Liferay - Mon, 05/21/2018 - 11:14

The insurance industry worldwide is in the midst of digital transformation, with insurance providers attempting to apply new technology and modern ways of doing business to an industry that is both heavily regulated by governments and often reluctant to change ways of doing business that have been in place for years. As such, many institutions within the industry may experience slowdown in their adoption and application of new technologies.

The result is a need for technology that helps support and empower insurance companies to be as fast as possible in how they can adopt new solutions and meet the everyday needs of customers. In addition, this technology must be dependable and highly secure to protect the vital data used within everyday business. However, in order to embrace change, insurance providers must better understand both what is slowing down the digital transformation process in their industry and what modern technology can do to quickly and efficiently embrace change.

Why Does the Insurance Industry Need Digital Transformation?

As discussed by McKinsey, the insurance industry is one of the last to undergo large-scale digital transformation, as its companies have been largely insulated against potential disruption. This is due to government regulation, the complexity of the services offered, large capital requirements for being a major member of the industry, complex business networks and a wide array of policies that need to be reviewed during every procedure. Together, these elements have long prevented upstart companies from entering the industry and have slowed the adoption of new technology that would suddenly change the way business is commonly done.

However, the continuing advancements made via digital transformation in other industries have now come to the insurance industry. As companies around the world create modern digital experiences and allow customers to shape online services around their individual needs, customers within the insurance industry begin to expect a similar level of service that they experience elsewhere.

The biggest cause of digital transformation in the industry is that insurance is based on customer self-service, which is supported by highly trained employees. Today, modern customers expect digital services that match the precedent set by companies like Amazon. As such, it is important for the modern insurance provider to give audiences a wide array of effective tools built for many different life circumstances, but also a customer experience that flows along easily and helps empower individuals to care for their unique needs.

Along with satisfied customers, the impact of digital transformation on insurance customer self-service can also be seen in finances. According to Forbes, process automation enabled by insurance digital transformation can result in up to a 65% reduction in costs, which is helping to propel transformation in the industry.

How Modern Technology is Speeding Up Insurance

Today, insurance providers around the world wonder what role insurance technology will play in either helping or hindering their standing in the industry. However, the many new insurtech applications being created are meant to enable incumbents, not undermine them, even if they are often perceived as disruptors in the industry.

A well-run company should be able to execute in any industry by acquiring and retaining customers. This means that any company looking to embrace digital transformation will require effective tools and a trained team to perform well no matter the industry. The effects of digital transformation within the insurance industry are not the new cause of giant disruption, just the evolution of and continued growth within technology used throughout the industry. Companies who seek to grow and meet modern customer expectations will use these new tools to empower their workforces and long-term business strategies in ways that were not possible beforehand.

Today, brokers are beginning to gain access to data and tools that big insurance companies have had for much longer thanks to the increased accessibility of such solutions. Brokers are attempting to leverage analytics to determine the best and longest-lasting acquisitions for substantial financial gains. However, the vast amount of data being generated by customers and prospects needs to be managed and utilized through predictive analytics in order to create personalized insurance package services, which both individuals and organizations commonly expect in the modern age of insurance.

As a whole, modern insurance companies have made great strides in analytics and have begun applying omnichannel experiences to their online presence. However, most are very early in applying deep customer insights to the data they have collected to create more effective customer experiences.

These new challenges and opportunities mean that insurance providers of all sizes can innovate in exciting, meaningful ways that empower employees and better reach target audiences. Together, they make digital transformation in the insurance industry not just a priority, but a meaningful next step for companies around the world.

What is the State of Insurance Digital Transformation Today?

To help insurance companies better understand their industry today, Liferay conducted a global survey of providers regarding the state of digital transformation. Learn their answers and more insights into insurance digital transformation in our whitepaper.

Read “Insurance in the Digital Age”   Matthew Draper 2018-05-21T16:14:11Z
Categories: CMS, ECM

Four Major Mistakes in Predictive Marketing and How to Avoid Them

Liferay - Mon, 05/21/2018 - 11:00

From dynamic web content to targeted email campaigns to mobile app push notifications, predictive marketing can take on many forms. However, all of these forms are defined by anticipating future needs and interests based on customer data and history in order to shape online experiences for a higher degree of marketing success.

The following four predictive marketing mistakes can frequently occur and negatively impact the campaigns of companies across all industries, as well as the resulting opinions of potential customers. However, solutions are possible so long as businesses are aware of these mistakes, their causes and the ways in which to both prevent and correct errors.

1. Poorly Structuring a Predictive Model

Pulling a large amount of detailed data can give a business the chance to understand the history and interests of individual customers to predict their future needs. However, accessing massive amounts of vital data is only half of the equation when creating a successful predictive marketing campaign. Without a properly researched and structured predictive model, a company may be unable to provide the level of detail and nuanced experiences needed for success. Without proper planning, models may even provide incorrect experiences and product offers.

Solution: As discussed by Target Marketing Mag, successful predictive models prioritize detailed integration of customer data long before they begin applying these insights to the experiences they create. This includes collecting and comparing data from multiple sources so that a single customer view can be created, which not only creates a robust view of individuals, but eliminates errors, such as misspelled names or incorrect assumptions regarding interests. When combined with a predictive score that determines a customer’s level of interest in both product and the business as a whole, a predictive model can have a much lower chance of making critical errors.
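The "single customer view" step described above can be sketched as merging records from several sources by a shared key, with later values filling gaps rather than creating duplicate entries, and then deriving a predictive score from the unified view. The field names, merge rule and score weights below are illustrative assumptions, not a known scoring model.

```python
# Sketch: unify customer records from multiple sources, then score.

def merge_records(records, key="email"):
    merged = {}
    for rec in records:
        view = merged.setdefault(rec[key], {})
        for field, value in rec.items():
            view.setdefault(field, value)  # first non-missing value wins
    return merged

def predictive_score(view):
    # Toy score: weight purchases more heavily than site visits.
    return 3 * view.get("purchases", 0) + view.get("visits", 0)

crm = [{"email": "a@x.com", "name": "Ana", "purchases": 2}]
web = [{"email": "a@x.com", "visits": 5},
       {"email": "b@x.com", "visits": 1}]

views = merge_records(crm + web)
print(len(views))                          # 2 distinct customers
print(predictive_score(views["a@x.com"]))  # 11
```

Deduplicating by a stable key before scoring is what prevents the misspelled-name and wrong-assumption errors the paragraph above warns about.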

2. Starting with a High Profile Project

Every predictive marketing campaign has to start somewhere and after investing time and resources into the systems that enable these targeted campaigns, it may be tempting to kick things off with a high profile, large audience project. However, the cost of error becomes higher the larger a predictive marketing campaign becomes, and it may take some time to smooth out any shortcomings that may affect a campaign.

As highlighted by Computer World, companies have frequently claimed that their predictive models would “revolutionize” their respective industries and, as such, debuted high-profile campaigns in an effort to wow their target audiences. However, these projects may lead to well-publicized errors and simply be too large and complex to maintain over time, leading businesses to shut down the very projects they claimed would propel them to the top of the industry.

Solution: Create small, realistic goals for projects that can make a difference in profits but will not have a major negative impact in the case of mistakes. Once successful, build additional, larger projects informed by past successes and failures.

3. Over-Targeting the Customer

Every predictive marketing campaign must walk a fine line between providing an individualized experience and allowing audiences to feel independent in their customer journeys. When prospects have been over-targeted, they may become numb to your company’s marketing efforts or, even worse, begin to resent your business. Over-targeting can take the form of a constant stream of emails, shopping history continually reflected in every online experience, being addressed by name everywhere and website layouts that continually shift in an attempt to match their interests.

While these experiences can help a customer feel understood by a company, they can also burn them out if they feel continually tracked while online. As customers become warier regarding how companies use their data, an over-abundance of obviously targeted experiences may cause them to seek out competitors.

Solution: A successful predictive marketing campaign should lay out a friendly path before the targeted customer, providing him or her with the products most likely to align with their interests for ease of access and customer support that quickly meets their needs. Consider less obviously targeted experiences and fewer direct communications so that customers feel more at ease with your company handling their data.
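The "fewer direct communications" advice above can be made concrete with a simple frequency cap that holds further marketing messages once a customer has hit a weekly limit. The limit value and all names here are illustrative assumptions.

```python
# Sketch of a per-customer contact frequency cap.
from collections import defaultdict

WEEKLY_LIMIT = 2

sent_this_week = defaultdict(int)

def try_send(customer, message):
    """Send at most WEEKLY_LIMIT marketing messages per customer per week."""
    if sent_this_week[customer] >= WEEKLY_LIMIT:
        return False  # hold the message; the customer has seen enough
    sent_this_week[customer] += 1
    return True

print(try_send("ana", "Spring sale"))   # True
print(try_send("ana", "Last chance"))   # True
print(try_send("ana", "Final hours"))   # False (cap reached)
```

A real system would reset counters on a schedule and cap per channel, but even this minimal gate prevents the message fatigue described above.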

4. Leveraging Inappropriate Data

In the modern age of data gathering, many businesses have access to social media, email, shopping history and much more willingly provided by target audiences. However, the trouble begins when a company decides to leverage as much information as they can without discernment regarding whether or not they should put such information to use.

Often, these issues come down to the generic application of big data without crucial segmentation regarding marketing efforts. The result can be wedding congratulations to unmarried targets and baby product advertisements to prospects who haven’t actually announced their pregnancy yet. As discussed by Umbel, it may be tempting for businesses to collect all the data that they can, but the result is often a messy blend of crucial information and bits of insight that should never be used in marketing material.

Solution: When creating parameters for data collection, create clear objectives. What do you plan on using the data for within your marketing campaigns? In addition, what types of data will never be necessary within your predictive marketing? When combined with a system that segments this data as needed, companies can avoid serious mistakes.
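The "clear objectives" rule above can be enforced mechanically with an explicit allowlist declaring which collected fields marketing may use; everything else is dropped before segmentation ever sees it. The field names here are hypothetical.

```python
# Sketch: scrub a customer profile down to marketing-approved fields.

MARKETING_ALLOWLIST = {"email", "purchase_history", "newsletter_opt_in"}

def scrub_for_marketing(profile):
    return {k: v for k, v in profile.items() if k in MARKETING_ALLOWLIST}

raw_profile = {
    "email": "a@x.com",
    "purchase_history": ["shoes"],
    "newsletter_opt_in": True,
    "relationship_status": "single",  # collected, but never for marketing
    "health_notes": "allergy info",
}

clean = scrub_for_marketing(raw_profile)
print(sorted(clean))  # ['email', 'newsletter_opt_in', 'purchase_history']
```

Inverting the default in this way, excluded unless explicitly approved, is what keeps sensitive bits of insight out of campaign material even when they sit in the same database.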

Retaining and Engaging Your Customers

Retaining and engaging existing customers for a strong, loyal audience is a major factor in the long-term success of every business. Read more about how to create a return customer strategy that works for you in our whitepaper.

Read “Engage Existing Customers: Four Key Strategies”   Matthew Draper 2018-05-21T16:00:48Z
Categories: CMS, ECM

Liferay Mobile Strategy - ALL the details you need!

Liferay - Sat, 05/19/2018 - 09:28

Want to know more about Liferay Mobile strategy? Check out my new video series about Liferay Mobile. Let me know if you have any questions.

Video 1 https://youtu.be/R48gQ6bnQpo

Video 2 https://youtu.be/lAT_XjEx8Do

Video 3 - Part 1 https://youtu.be/NOLtn4Y3W9E

Video 3 - Part 2 https://youtu.be/UsDTSF2aMPw

Video 4 https://youtu.be/H0ObNpXRAHU

Video 5 https://youtu.be/VIeyeAz2CXQ

Video 6 https://youtu.be/7bOlD4Xj2E8

For the script of the demos in those videos, you can download it from here: https://drive.google.com/drive/folders/12WsD0at0qPXdNFcnNudy6oIdGWwBnMu3

Fady Hakim 2018-05-19T14:28:24Z
Categories: CMS, ECM

Liferay Portal 7.1 Beta 1 Release

Liferay - Fri, 05/18/2018 - 16:21
I'm pleased to announce the immediate availability of Liferay Portal 7.1 Beta 1.

Download Now

New in Beta 1

Modern Site Building - Display Pages: Users can now map Basic Web Content articles to editable fields in fragments by creating a Display Page. Users can then select this display page when creating web content articles. Hitting the friendly URL of the article will render the article on a page using the selected display page.

Fixed in Beta 1

New Features Summary

Modern Site Building: Liferay 7.1 introduces a new way of adding content. Fragments allow a content author to create content in small reusable pieces. Fragments can be edited in real time or can be exported and managed with the tooling of your choice. Use page templates from within a site and have complete control over the layout of your content pages. Navigation menus now give you complete control over site navigation. Create site navigation in new and interesting ways and have full control over the navigation's visual presentation.

Forms Experience: Liferay 7.1 includes a completely revamped forms experience. Forms can now have complex grid layouts, numeric fields and file uploads. They now include new personalization rules that let you customize the default behavior of the form. Using the new Element Sets, form creators can now create groups of reusable components. Form fields can now be translated into any language using any Liferay locale and can also be easily duplicated.

Redesigned System Settings: System Settings has received a complete overhaul. Configurations have been logically grouped together, making it easier than ever before to find what's configurable. Several options that were located on Server Administration have also been moved to System Settings.

User Administration: The user account form has been completely redesigned. Each form section can now be saved independently of the others, minimizing the chance of losing changes. The new ScreensNavigationEntry lets developers add any form they want to user administration.

Improvements to Blogs and Forums: Blog readers can now unsubscribe from notifications via email. Friendly URLs used to be generated based on the entry's title; authors now have complete control over the friendly URL of the entry. Estimated reading time can be enabled in System Settings and is calculated automatically for each entry. Blogs also have a new cards ADT that can be selected from the application configuration. Videos can now be added inline while writing a new entry from popular services such as YouTube, Vimeo, Facebook Video and Twitch. Message Boards users can now attach as many files as they want by dragging and dropping them into a post. Message Boards has also received many visual updates.

Workflow Improvements: Workflow has received a complete UI overhaul. All workflow configuration is now consolidated under one area in the Control Panel. Workflow definitions are now versioned, and previous versions can be restored. Workflow definitions can now be saved in draft form and published live when they are ready.

Infrastructure: Many improvements have been incorporated at the core platform level, including support for Elasticsearch 6.0 and Tomcat 9.0. At the time of this release, JDK 8 is still the only supported JDK.

Documentation

Documentation for Liferay 7.1 is well underway. Many sections have already been completed in the Deployment and Development sections. For information on upgrading to 7.1, see the Upgrade Guide.

Jamie Sammons 2018-05-18T21:21:31Z
Categories: CMS, ECM

Web Portal Design

Liferay - Tue, 05/15/2018 - 15:04
Web Portal Design Is Evolving

Web portal design has come a long way since the days of static gateways (such as AOL or Yahoo). A portal's front-end experience can be customized and refined, but what truly sets portals apart is their infrastructure. A modern, well-designed portal can connect hundreds of different legacy systems and turn them into a single, streamlined business solution that adapts to your company.

These five fundamental elements of web portal design address some of the main reasons companies decide to build portals, such as enabling more unified collaboration or integrating systems. By incorporating these key components, you can find immediate gains in your portal project and ensure you are using the technology in the best possible way.

#1 SSO for Multiple Sites

During the transition from physical to digital, many single-touchpoint solutions sprang up inside companies. Online retailers, for example, still make you sign in to one site to view your orders and another to review your reward points. Or a company may require employees to log in to one site to submit their timesheets and another to request reimbursements.

Portals can connect multiple sites with single sign-on (SSO), removing the inconvenience of logging in to different sites several times. Modern portals should also support OAuth natively, allowing users to log in to your portal through Facebook, Google or another social platform.
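The single sign-on idea can be illustrated with several sites trusting one shared session store, so a user who logs in once is recognized everywhere. Real SSO relies on signed tokens and protocols such as SAML or OpenID Connect; this in-memory sketch only shows the shape of the idea, and all names are illustrative.

```python
# Sketch: multiple sites delegate authentication to one shared provider.
import secrets

class SSOProvider:
    def __init__(self):
        self._sessions = {}

    def login(self, username):
        token = secrets.token_hex(8)
        self._sessions[token] = username
        return token

    def whoami(self, token):
        return self._sessions.get(token)

sso = SSOProvider()

def site(name, provider):
    # Each site checks the shared provider instead of keeping its own logins.
    def handle(token):
        user = provider.whoami(token)
        return f"{name}: hello {user}" if user else f"{name}: please log in"
    return handle

orders, rewards = site("orders", sso), site("rewards", sso)
token = sso.login("ana")
print(orders(token))   # orders: hello ana
print(rewards(token))  # rewards: hello ana  (same login, second site)
```

The point of the pattern is that adding a third site means registering one more handler, not one more password database.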

In practice:

-     Hewlett-Packard has a global partner portal serving more than 650,000 users in 174 countries. Each country used to have its own site, but by leveraging a portal platform's SSO capability, the company was able to offer a single global URL to send to partners, who need only one login to access everything they need. The time saved by maintaining a single site should be more than enough to demonstrate the usefulness of this element, especially for companies with a history of multiple sites.

#2 Localization and Distribution of Communication

With the increasing amount of global offices and customer bases, companies need to have a way to manage the translation, localization and distribution of their content. The best portals have sufficient WCM capabilities to create multiple localizations of the same page, allowing administrators to focus on crafting and publishing their communications.


In practice:

-     Domino's has an intranet with sites for its stores, franchises and offices. The internal communications team sends different alerts and news to each audience, and because all three sites live on the same portal, the team can manage everything in one place. This saves valuable time that would otherwise be spent jumping between sites, hunting for documents and recreating content in different repositories. On the front end, users see only the content that is relevant and personalized for them.

#3 Reusable Components

Portals should be quick to build across web and mobile. This can mean duplicating standard functionality across multiple sites, such as when an organization needs to create department sites for each of its teams. It can also mean reusing web components when building mobile applications. For organizations that manage a suite of mobile apps, ensuring that your portal enables this reusability from the beginning will put you in a position to innovate faster later.
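The reuse described above can be sketched as one channel-neutral component consumed by both a web and a mobile renderer, so the business logic is written once and each new surface only adds presentation. All names here are illustrative.

```python
# Sketch: a shared component feeds both web and mobile renderers.

def order_status_component(order):
    # Channel-neutral business logic lives in one place.
    return {"id": order["id"], "status": order["status"].upper()}

def render_web(order):
    data = order_status_component(order)
    return f"<p>Order {data['id']}: {data['status']}</p>"

def render_mobile(order):
    data = order_status_component(order)
    return f"Order {data['id']}: {data['status']}"

print(render_web({"id": 7, "status": "shipped"}))     # <p>Order 7: SHIPPED</p>
print(render_mobile({"id": 7, "status": "shipped"}))  # Order 7: SHIPPED
```

When the status logic changes, both channels pick it up automatically, which is the faster-innovation payoff the paragraph above describes.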


In practice:

-     Telx had a very specific customer need: a native mobile app that could manage access to its data center. Because this functionality had already been built into its portal, the company was able to deliver it in the mobile app without rewriting any code. This ability to reuse key components across all of your digital touchpoints is extremely valuable and keeps your experience consistent with little additional effort.

#4 Workflow Management

Beyond role-based document management, portals allow non-technical end users to move their everyday workflows onto a collaboration platform. Advanced workflows can be configured by role, so that users can submit their work to their supervisors. This becomes especially important when employees handle confidential customer information and need to be sure it will not be lost.

In practice:

-     Advanced Energy had several stages of purchase order communications to manage. By moving this process onto a central portal platform, the organization stopped passing information around by fax, email and phone. Instead, roles and automated workflows let the team be sure that information was sent directly to the right person, with no risk of losing track of it.

#5 Collaboration

Collaboration features drive engagement in most types of portals. This is especially beneficial for knowledge management portals. Teams sometimes use email as the communication channel for a project, which quickly scatters information across conversations that cannot be tracked or managed. When you move these processes onto the same platform where you manage your document library, you help employees work more efficiently on projects that include members of other teams across the company. This can also pay off in partner portals, for example, where you work with users from different regions who need to share information about their work.

In practice:

-     CitiXsys rebuilt its knowledge portal to make it easier for users and partners to share information. The new system brings together a variety of services, from training video modules to discussion forums where employees help one another. The collaboration that grew out of the discussion forums amplified the portal's impact, and the company now reports that employees are three times more engaged.

Good Design Goes Beyond Look & Feel

Visual design and usability are important factors in web portal design, but a UI built on top of only basic functionality will not help companies achieve the ROI they need from their portals. Portal solutions are designed for building personalized digital workplace tools that help users get their work done, whether that user is a system administrator maintaining user roles or a patient trying to schedule their next appointment.

  Isabella Rocha 2018-05-15T20:04:59Z
Categories: CMS, ECM