AdoptOS

Assistance with Open Source adoption

Open Source News

Transforming Public Services Through Customer Experience

Liferay - Wed, 06/06/2018 - 15:59

Digital services are a way to show that public agencies can deliver quality service while opening the way for future interactions with citizens. Yet only 46% of Brazilian federal agencies and 26% of state agencies let people schedule appointments, consultations, and services through their websites, for example. An online presence should bring people closer to all the services and information available at that level of government.

How Technology Transforms and Connects Customer Experiences

Technology brings government and citizens ever closer, especially online. With that comes a series of concerns about serving the public and internal users better, and about keeping the systems inside institutions running. Today, fewer than half of federal and state agencies, and fewer than 25% of city governments, have responsive portals. The Brazilian government is already taking steps to offer a better, more relevant customer experience. But how can it deliver a better experience, quickly and practically, to citizens who are more connected every day?

Here are three ways technology can transform how government delivers experiences:

1. Automation and Self-Service

First, a quality customer experience gives citizens the freedom to steer their own journey. Instead of calling or visiting an office in person, a powerful digital experience platform gives citizens the autonomy to find information, make payments, and interact with the customer service team in a way that is more efficient for both sides.

2. Process Transformation

Adopting a digital strategy also reaches old processes. With a complete CMS behind their digital experience platform, public institutions can leave behind the many activities and records still handled only on paper. Silos between departments can also be overcome through better management of documents, calendars, workflows, and collaboration tools.

3. Cost Reduction

Automation, self-service, and paperless offices are excellent ways to save time and money. Automating the customer journey means freeing your team for other activities and reducing what you spend on data management.

The Technology Capabilities You Will Need

We know why public services and government institutions need to adapt to the new digital reality, but that raises a question: what kind of technology makes all the benefits above possible? To be considered a digital experience platform (DXP), your platform must include a complete content management system (CMS), document management and workflows, plus capabilities for building mobile apps. But those are just the starting points. To truly transform traditional government agencies into Government as a Service (GaaS) hubs, here are three important technology attributes to weigh when choosing your next digital experience platform:

1. Omnichannel Support

The ideal digital experience platform can deliver omnichannel communication and transactions. That means your customers can reach government portals from desktops, notebooks, smartphones, tablets, or any other device.

2. Personalization Capabilities

When customers actually interact with government systems, whatever the channel, they need to be served in a personalized way. To make that possible, the DXP must store each customer's name, details, and preferences in order to deliver a satisfying customer journey that is personalized to just the right degree.

3. Integration

To truly connect your audience's experiences, a DXP must integrate with legacy systems and with whatever new channels and applications appear over the coming years. A microservices architecture helps make this possible, while also guarding against technologies that may fall short in the future. Essentially, a microservices architecture means developing software as a set of small, modular, independently deployable services, where each service runs in its own process and communicates through a lightweight, well-defined mechanism to meet business goals.

Success Story: IplanRio - Carioca Digital

IplanRio is the company responsible for providing Information and Communication Technology (ICT) services to the municipal agencies of Rio de Janeiro, serving a population of more than 6 million. Through the Carioca Digital portal, the City of Rio de Janeiro joined state and municipal efforts to cut red tape and modernize services that until then were available only at physical offices.


Among the many services readily available through Carioca Digital are viewing electronic health records, property tax (IPTU) information, the city's cultural calendar, and Nota Fiscal Carioca tax credit balances, as well as opening service tickets with 1746, the citizen service center. Some Detran/RJ services, previously available only at physical offices, are also online through the platform. "The tool gives registered users customized access to information, based on their CPF number and geolocation. It is a way for the city government to stay in constant dialogue with citizens," says Fernando Ivo, advisor at IplanRio. As a result, user satisfaction rose significantly and monthly accesses to Carioca Digital data grew by 30%.
 

You can find more information about Carioca Digital here.

A New Era of Government Engagement

To be ready for an omnichannel future, public institutions must first accept that the digital strategy is the only strategy, and then move toward technology that can bear the weight of this new and rapidly evolving digital era.

   

Want to deliver more powerful, personalized, and cost-effective customer experiences to citizens and local businesses? Check out the whitepaper:

Four Strategies to Transform Your Customer Experience. Luciano Demery 2018-06-06T20:59:26Z
Categories: CMS, ECM

Why data pipeline patterns are SnapLogic’s best kept secret

SnapLogic - Tue, 06/05/2018 - 18:36

Let’s face it, for IT developers, building data integrations can be daunting and time-consuming. And it doesn’t help when the pressure to deliver only increases when business stakeholders continuously ask, “When will it be done?” or “Why is my project taking so long?” What non-engineers may not realize is that building and maintaining hand-coded integrations[...] Read the full article here.

The post Why data pipeline patterns are SnapLogic’s best kept secret appeared first on SnapLogic.

Categories: ETL

Why we need a new liferay-npm-bundler (1 of 3)

Liferay - Tue, 06/05/2018 - 10:12
What is the problem with bundler 1.x

This is the first in a three-article series motivating and explaining the enhancements we have made to Liferay's npm bundler. You can learn more about it in its first release blog post.

How bundler 1.x works

As you all know, the bundler lets you package your JS files and npm packages inside Liferay OSGi bundles so that they can be used from portlets. The key feature is that you may use a standard npm development workflow and it will work out of the box without any need for complex deployments or setups.

To make its magic, the bundler grabs all your npm dependencies, puts them inside your OSGi bundle and transforms them as needed to be run inside portlets. Among these transformations, one of the most important is converting from CommonJS to AMD, because Liferay uses an AMD compliant loader to manage JS modules.
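
As a rough illustration of that CommonJS-to-AMD transformation (a minimal sketch, not the bundler's exact output; the module name and path are made up for the example), a CommonJS module such as:

    // CommonJS: synchronous require, exports via module.exports
    var isArray = require('isarray');

    module.exports = function logIt(value) {
        console.log(isArray(value));
    };

would end up wrapped in an AMD-style define call along these lines, with its dependencies declared up front so the loader can resolve them before running the factory:

    // AMD: the loader injects module and require into the factory
    define('my-portlet@1.0.0/js/log-it', ['module', 'require', 'isarray'], function(module, require) {
        var isArray = require('isarray');

        module.exports = function logIt(value) {
            console.log(isArray(value));
        };
    });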

Once your OSGi bundle is deployed, every time a user visits one of your pages, the loader gets information about all deployed npm packages, resolves your dependency tree and loads all needed JS modules.

For example: say you have a portlet named my-portlet that, once started, loads a JS module called my-portlet/js/main, and that module ultimately depends on the isarray npm package to do its job. That would lead to a project containing these files (among others):

package.json
    {
        "name": "my-portlet",
        "version": "1.0.0",
        "dependencies": {
            "isarray": "^1.0.0"
        }
    }
node_modules/isarray
    package.json
        {
           "name": "isarray",
           "version": "1.0.1",
           "main": "index.js"
        }
META-INF/resources
    view.jsp
        <aui:script require="my-portlet@1.0.0/js/main"/>
    js
        main.js
            require('isarray', function(isarray) {
                console.log(isarray([]));
            });

Whenever you hit my-portlet's view page, the loader looks for the my-portlet@1.0.0/js/main JS module and loads it. That causes main.js to be executed, and when the require call runs (note that it is the AMD require, not the CommonJS one), the loader uses information from the server, which has scanned package.json, to determine the version constraint on the isarray package and find a suitable version among all those deployed. In this case, if only your bundle is deployed to the portal, main.js will get isarray@1.0.1, the version bundled inside your OSGi JAR file.

What if we deploy two portlets with shared dependencies

Now imagine that one of your colleagues creates a portlet named his-portlet which is identical to my-portlet, but because he developed it later, it bundles isarray at version 1.2.0 instead of 1.0.1. That would lead to a project containing these files (among others):

package.json
    {
        "name": "his-portlet",
        "version": "1.0.0",
        "dependencies": {
            "isarray": "^1.0.0"
        }
    }
node_modules/isarray
    package.json
        {
            "name": "isarray",
            "version": "1.2.0",
            "main": "index.js"
        }
META-INF/resources
    view.jsp
        <aui:script require="his-portlet@1.0.0/js/main"/>
    js
        main.js
            require('isarray', function(isarray) {
                console.log(isarray([]));
            });

In this case, whenever you hit his-portlet's view page the loader looks as before for the his-portlet@1.0.0/js/main JS module and loads it. Then the require call is executed and the loader finds a suitable version. But now something has changed because we have two versions of isarray deployed in the server:

  • 1.0.1 bundled inside my-portlet
  • 1.2.0 bundled inside his-portlet

So, which one does the loader give to his-portlet@1.0.0/js/main? As we said, it gives the best suitable version among all deployed, that is, the newest version satisfying the semantic version constraints declared in package.json. In the case of his-portlet, that is version 1.2.0, because it satisfies the semantic version constraint ^1.0.0.

It looks like everything works just as it did with my-portlet, doesn't it? Well, not really. Let's look at my-portlet again, now that we have two versions of isarray. In my-portlet's package.json file the semantic version constraint for isarray is ^1.0.0 too, so what will it get?

Of course: version 1.2.0 of isarray. That is because 1.2.0 satisfies ^1.0.0 better than 1.0.1 does. In fact, it's similar to what npm would do if you reran npm install in my-portlet: it would find a newer version on http://npmjs.com and update to it.
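
Under the hood this is standard semver resolution: pick the highest available version that satisfies the constraint. A sketch of the behavior using the npm semver package (illustrative only, not the loader's actual code):

    // npm install semver
    var semver = require('semver');

    var deployed = ['1.0.1', '1.2.0'];   // copies of isarray available on the server
    var constraint = '^1.0.0';           // declared in my-portlet's package.json

    // Both versions satisfy the constraint; the loader-style answer is the newest.
    console.log(semver.maxSatisfying(deployed, constraint)); // prints '1.2.0'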

This strategy also leads to deduplication of the isarray package: if both my-portlet and his-portlet are placed on the same page, only one copy of isarray is loaded into the JS interpreter.

But that's perfect! What's the problem, then?

Although this looks quite nice, it has some drawbacks. One is already visible in the example: the developer of my-portlet was using isarray@1.0.1 in his local development copy when he bundled it, which means all tests were done with that version. But then, because a colleague deployed another portlet with an updated isarray, his bundle changed behavior and used a different version which, even if declared semantically compatible, may lead to unexpected behaviours.

Not only that, but whether version 1.0.1 or 1.2.0 is loaded for my-portlet is not decided in any way by the developer of my-portlet; it changes depending on what is deployed on the server.

Those drawbacks are very easy to spot, but if we look deeper, we can find two subtler problems that may lead to instabilities and hard-to-diagnose bugs.

Transitive dependencies shake

Because the bundler 1.x solution performs aggressive semantic version resolution, the dependency graph of any project may lead to unexpected results depending on how semantic version constraints are declared. This is especially important for what I call framework packages, as opposed to library packages. This is not a formal definition, but I say framework packages for npm packages that are supposed to call the project's code, while library packages are supposed to be called from the project's code.

When using library packages, a version switch is not so bad, because it usually leads to using a newer (and thus more stable) version. That's the case in the isarray example above.

But when using frameworks, you usually have a bunch of packages that are supposed to cooperate with each other and be at specific versions. That, of course, depends on how the framework is structured and may not hold true for all of them, but it is definitely easier to have problems in a dependency graph where some subset of packages is tightly coupled than in one where every package is isolated and doesn't worry too much about the others.

Let's see an example of what I'm referring to: imagine you have a project using the Non Existing Wonderful UI Components framework (let's call it WUI). That framework is composed of 3 packages:

  1. component-core
  2. button
  3. textfield

Packages 2 and 3 depend on 1. And suppose that package 1 has a class called Widget from which Button (in package 2) and TextField (in package 3) extend. This is a common widget-based UI pattern; you get the idea. Now, let's suppose that Widget has this check somewhere in its code:

    Widget.sayHelloIfYouAreAWidget = function(widget) {
        if (widget instanceof Widget) {
            console.log('Hey, I am a widget, that is wonderful!');
        }
    };

The function tests whether an object extends Widget by looking at its prototype chain, and prints a message if it does.

Now, say that we have two portlet projects again: my-portlet and his-portlet (not the ones we were using above, but two new portlet projects that use WUI) and their dependencies are set like this:

my-portlet@1.0.0
➥ button 1.0.0
➥ textfield 1.2.0
➥ component-core 1.2.0

his-portlet@1.0.0
➥ button 1.5.0
➥ textfield 1.5.0
➥ component-core 1.5.0

In addition, the dependencies of button and textfield are set like this:

button@1.0.0 ➥ component-core ^1.0.0
button@1.5.0 ➥ component-core ^1.0.0
textfield@1.2.0 ➥ component-core ^1.0.0
textfield@1.5.0 ➥ component-core ^1.0.0

If the two portlets are created at different times, depending on what is available at http://npmjs.com, you may get the following versions after npm install:

my-portlet@1.0.0
➥ button 1.0.0
  ➥ component-core 1.2.0
➥ textfield 1.2.0
  ➥ component-core 1.2.0
➥ component-core 1.2.0

his-portlet@1.0.0
➥ button 1.5.0
  ➥ component-core 1.5.0
➥ textfield 1.5.0
  ➥ component-core 1.5.0
➥ component-core 1.5.0

This assumes that the latest version of component-core available when npm install was run in my-portlet was 1.2.0, but then it was updated and by the time that his-portlet ran npm install the latest version was 1.5.0.

What happens when we deploy my-portlet and his-portlet?

Because the platform does aggressive deduplication, you will get the following dependency graphs:

my-portlet@1.0.0
➥ button 1.0.0
  ➥ component-core 1.5.0 (✨ note that it gets 1.5.0 because `his-portlet` is providing it)
➥ textfield 1.2.0
  ➥ component-core 1.5.0 (✨ note that it gets 1.5.0 because `his-portlet` is providing it)
➥ component-core 1.2.0 (✨ note that the project gets 1.2.0 because it explicitly asked for it)

his-portlet@1.0.0
➥ button 1.5.0
  ➥ component-core 1.5.0
➥ textfield 1.5.0
  ➥ component-core 1.5.0
➥ component-core 1.5.0

We are almost there. Now imagine that both my-portlet and his-portlet do this:

    var btn = new Button();
    Widget.sayHelloIfYouAreAWidget(btn);

Will it work as expected in both portlets? As you may have guessed, the answer is no. It will work in his-portlet, but in my-portlet the call to Widget.sayHelloIfYouAreAWidget won't print anything: the instanceof check tests a Button that extends Widget from component-core@1.5.0 against the Widget from component-core@1.2.0 (because the project code is using that version, not 1.5.0), and thus it fails.
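
This failure mode is easy to reproduce even outside Liferay: whenever two copies of the "same" class are loaded, instanceof compares against two different prototype objects. A standalone sketch:

    // Two copies of the "same" class, as if loaded from component-core@1.2.0
    // and component-core@1.5.0; each copy has its own prototype object.
    function WidgetV120() {}
    function WidgetV150() {}

    function Button() {}                                     // Button from button@1.0.0...
    Button.prototype = Object.create(WidgetV150.prototype);  // ...extends the 1.5.0 copy

    var btn = new Button();
    console.log(btn instanceof WidgetV150); // true
    console.log(btn instanceof WidgetV120); // false: different prototype chain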

I know this is a fairly complicated (and maybe unstable) setup that can ultimately be fixed by tweaking the framework dependencies or changing the code, but it is definitely a possible one. Not only that, but there's no way for a developer to know what is happening until he deploys the portlets, and even if a certain combination of portlets works now, it could fail later when a new portlet is deployed.

By contrast, in a scenario where the developer used a standard bundler like webpack or Browserify, the final build would be predictable for both portlets and would work as expected, each one loading its own dependencies. The drawback is that with standard bundlers there's no way to deduplicate and share dependencies between them.

Diverted peer dependencies

Let's see another case where bundler 1.x cannot satisfy the build expectations. This time it involves peer dependencies. We will again use two projects named my-portlet and his-portlet, with the following dependencies:

my-portlet@1.0.0
➥ a-library 1.0.0
➥ a-helper 1.0.0

his-portlet@1.0.0
➥ a-library 1.0.0
➥ a-helper 1.2.0

At the same time, we know that a-library has a peer dependency on a-helper ^1.0.0. That is:

a-library@1.0.0 ➥ [peer] a-helper ^1.0.0

So, in both projects, the peer dependency is satisfied, as both a-helper 1.0.0 (in my-portlet) and 1.2.0 (in his-portlet) satisfy a-library's semantic version constraint ^1.0.0.

But now, what happens when we deploy both portlets to the server? Because the platform aggressively deduplicates, there will be only one a-library package in the system, making it impossible for it to depend on a-helper 1.0.0 and 1.2.0 at the same time. So the most rational decision, probably, is to make it depend on a-helper 1.2.0.

That looks OK in this case, as we are satisfying the semantic version constraints correctly, but we are again changing the build at runtime without any control on the developer's side, and that can lead to unexpected results.

However, there's a subtler scenario where the bundler doesn't know how to satisfy peer dependencies: when they appear in a transitive path.

So, for example, say that we have these dependencies:

my-portlet@1.0.0
➥ a-library 1.0.0
➥ a-sub-helper 2.0.0

a-library@1.0.0
➥ [peer] a-helper >=1.0.0

a-helper@1.0.0
➥ a-sub-helper 1.0.0

Now suppose that a-library requires a-sub-helper in one of its modules. In this case, when run in Node.js, a-library receives a-sub-helper at version 2.0.0, not 1.0.0. That's because a-library's peer dependency on a-helper plays no role in resolving a-sub-helper: since a-library does not declare a-sub-helper as a dependency, but merely relies on a-helper to provide it, a-sub-helper is simply resolved from the root of the project.
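
In Node.js terms, the resolution walks up the directory tree from the requiring module. A sketch of how the lookup plays out, assuming npm's usual hoisted layout (the comments trace the steps):

    // Inside node_modules/a-library/index.js (hypothetical module):
    var subHelper = require('a-sub-helper');
    // Node looks for a node_modules/a-sub-helper folder starting next to
    // a-library. a-library has no nested node_modules (a-sub-helper is not
    // one of its declared dependencies), so the lookup walks up and resolves
    // the copy hoisted to the project root, which is version 2.0.0.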

But this cannot be reproduced in Liferay, because it needs to know the semantic version constraints of each dependency package, as there is no node_modules folder in which to look packages up. We could fix it by injecting an extra dependency on a-sub-helper 2.0.0 into a-library's package.json, but that would work for this project only, not for all projects deployed on the server. That is because, as we saw earlier in this section, there's only one a-library package for everybody, while at the same time several projects may resolve a-sub-helper to different versions when it is required from a-library.
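
The injection workaround amounts to rewriting a-library's package.json at build time. A sketch of the manifest before and after (hypothetical contents, matching the versions in this example):

    node_modules/a-library/package.json (as published)
        {
            "name": "a-library",
            "version": "1.0.0",
            "peerDependencies": {
                "a-helper": ">=1.0.0"
            }
        }
    node_modules/a-library/package.json (after injection)
        {
            "name": "a-library",
            "version": "1.0.0",
            "peerDependencies": {
                "a-helper": ">=1.0.0"
            },
            "dependencies": {
                "a-sub-helper": "2.0.0"
            }
        }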

In fact, we used this technique for Angular peer dependencies by means of the liferay-npm-bundler-plugin-inject-angular-dependencies plugin, and it used to work if you only deployed one version of Angular. But things became muddier if you deployed several.

For these reasons, we needed a better model on which the whole solution could grow in the future. That need led to bundler 2.x, where we have, hopefully, created a solid foundation for future development of the bundler.

If you liked this, head on to the next article where we explain how bundler 2.x addresses these issues.

Ivan Zaera 2018-06-05T15:12:26Z
Categories: CMS, ECM


Enthusiasm and Community-Building at SF Bay Area CiviCamp 2018

CiviCRM - Mon, 06/04/2018 - 11:00

A tapestry of attendee comments…

Categories: CRM

3 reasons to visit PrestaShop at IRCE in Chicago, USA

PrestaShop - Mon, 06/04/2018 - 08:21
PrestaShop will be present at the IRCE 2018 exhibition, June 5-8, at McCormick Place.
Categories: E-commerce

Meet Hussein, Ambassador of the month | May 2018

PrestaShop - Fri, 06/01/2018 - 09:05
About you: Three words that best describe you: ambitious, professional, and a fan of challenges.
Categories: E-commerce

Do your own Analytics in Liferay with Elastic Search and Kibana

Liferay - Thu, 05/31/2018 - 18:17

Liferay integrates out of the box with Piwik and Google Analytics.

However, doing analytics with Elasticsearch, Logstash and Kibana is not much harder:


https://youtu.be/os5gqpnC0GA

How to do it?
Easy

First, we need to get the data:

Offloading the work to users' browsers using Ajax, so as not to affect our server's performance, seems the most logical way to do it (imagine hundreds of concurrent users clicking on things and moving their mouse at the same time). That's how Piwik, Google Analytics, and Omniture work.

Something similar to this (https://raw.githubusercontent.com/roclas/analytics-storage-server/logstash/javascript_data_collection/hover_and_clicks.js) could do the job.
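
The linked script is the reference; as a minimal sketch of the idea (the /analytics endpoint is a hypothetical URL), the client-side collection can be as simple as:

    // Minimal client-side beacon: capture clicks and POST them to a
    // collection endpoint ('/analytics' is a made-up URL for the example).
    document.addEventListener('click', function(event) {
        var payload = JSON.stringify({
            type: 'click',
            target: event.target.tagName,
            x: event.pageX,
            y: event.pageY,
            page: window.location.href,
            ts: Date.now()
        });
        navigator.sendBeacon('/analytics', payload);
    });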

It would also make sense not to burden our Liferay server with receiving these Ajax requests. Our application server is a bit heavy, and something smaller (and more scalable) would do a better job at this simple task.


What about a pool of threads that work asynchronously?

This project (https://github.com/roclas/analytics-storage-server/tree/filesystem) is basically that: a pool of asynchronous threads acting as an HTTP server. It is scalable and fast. It can receive all our Ajax events and store them in files (so that they can later be analyzed in batch using machine learning, big data tools, etc.).
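
The linked project is the actual implementation; just to illustrate the shape of it, here is a minimal sketch of an asynchronous event collector in Node.js:

    // Minimal asynchronous HTTP collector: accept POSTed events and
    // append them to a file, one JSON document per line.
    var http = require('http');
    var fs = require('fs');

    http.createServer(function(req, res) {
        var body = '';
        req.on('data', function(chunk) { body += chunk; });
        req.on('end', function() {
            // The append is asynchronous, so the event loop stays free.
            fs.appendFile('events.log', body + '\n', function() {});
            res.writeHead(204, {'Access-Control-Allow-Origin': '*'});
            res.end();
        });
    }).listen(8080);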

 

What about visualization? Where is the data analysis here?

 

This is where the second part of our problem starts; we are able to collect the data, but now we have to analyze it and show graphs and pie charts.

In this other branch (https://github.com/roclas/analytics-storage-server/tree/logstash), the server pipes all the events into Logstash, which inserts them into Elasticsearch.
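
A minimal sketch of what such a Logstash pipeline could look like (the index name is hypothetical; the linked branch holds the real configuration):

    # logstash.conf sketch: read JSON events from stdin (the server pipes
    # them in) and index them into Elasticsearch.
    input {
        stdin { codec => "json" }
    }
    output {
        elasticsearch {
            hosts => ["localhost:9200"]
            index => "analytics-events"
        }
    }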


Once our data is in Elasticsearch, we just have to point Kibana at our index and start creating dashboards and playing with charts.

If you are interested in the details, this video shows how everything works in more depth (it is probably also a bit too long and boring): https://youtu.be/NMPWR2vdnio

 

Carlos Hernandez 2018-05-31T23:17:02Z
Categories: CMS, ECM


Who Needs Calculus? Not High-Schoolers. Are computer classes better?

SnapLogic - Thu, 05/31/2018 - 17:05

Previously published in The Wall Street Journal. Thousands of American high-school students on Tuesday will take the Advanced Placement calculus exam. Many are probably dreading it, perhaps seeing the test as an attempt to show off skills they will never use. What if they’re right? I started thinking about this recently when my 14-year-old daughter was doing[...] Read the full article here.

The post Who Needs Calculus? Not High-Schoolers. Are computer classes better? appeared first on SnapLogic.

Categories: ETL

Continuous Reporting: Has the time come to jettison quarterly reports?

SnapLogic - Wed, 05/30/2018 - 15:40

The time has come to say goodbye to quarterly reporting and quarterly earnings guidance. And not just for the usual reasons cited like short-termism. In this day and age of “real-time everything,” a quarterly reporting cadence is antiquated, pointless and unacceptable. Public companies seeking money from personal and institutional investors are giving quarterly reports short[...] Read the full article here.

The post Continuous Reporting: Has the time come to jettison quarterly reports? appeared first on SnapLogic.

Categories: ETL

Improved Geocoding in CiviCRM

CiviCRM - Wed, 05/30/2018 - 15:27

A new geocoder extension released by the Wikimedia Foundation addresses some of the geocoding issues experienced by CiviCRM users.

Categories: CRM

What is Dropshipping? PrestaShop Modules for Dropshipping

PrestaShop - Wed, 05/30/2018 - 07:15
The search for profitable business and the boom in online sales have made Dropshipping a sales model that is currently gaining more and more
Categories: E-commerce

Drupal views - CiviCRM Contact Distance Search - with a map!

CiviCRM - Wed, 05/30/2018 - 05:46

Drupal module - CiviCRM Contact Distance Search

MillerTech released this Drupal module back in 2015 but has recently updated it with new features (a map and "use your location") and made it more configurable.

This module offers a fully configurable and extendable Drupal view that lets you search by postcode within a given distance.

Use case scenario: find schools within a 5-mile radius of my postcode.

Categories: CRM

Solving contemporary API challenges: Powered by SnapLogic and Apigee

SnapLogic - Tue, 05/29/2018 - 14:37

As enterprises embark on their digital transformation journey, only an API-centric platform can truly help deliver the best results while enhancing the “Integration Experience” for both the citizen integrator and the developer community. API creation is one of the many tangible benefits obtained from the SnapLogic Enterprise Integration Cloud platform. With our May 2018 release,[...] Read the full article here.

The post Solving contemporary API challenges: Powered by SnapLogic and Apigee appeared first on SnapLogic.

Categories: ETL

Open Source Principles Give the Workplace Soul

Open Source Initiative - Tue, 05/29/2018 - 13:26

As part of the 20th anniversary of open source software and the Open Source Initiative, the OSI is reaching out to our community of individual and affiliate members, sponsors, current and past board directors, and supporters to share their success stories. We want to hear from those who’ve succeeded in, and with, open source software, development, and communities. This time, we're hearing from OSI Premium Sponsor Cumulus Networks.

Why did you choose open source? How did you build your team? What issues did you overcome? Where did you find support? What are the benefits you've realized?

We hope that by sharing these stories of success from your peers and colleagues, we can help those just beginning their journey with open source software, those just joining the open source community. We'll post these articles here, and then add them to OpenSource.net for archiving and future reference. Thank you to all of those who are sharing their stories here and again contributing, in another way, to the success of open source.

I started thinking about what exactly makes my employer, Cumulus Networks, the way it is. And the conclusion I’ve arrived at is that the principles of open source that make our technology so innovative and forward-thinking also extend to the workplace. An open development environment translates to an open working environment, and the beliefs of the open source community translate into positive influences in the workspace. Anyone who’s spent time at our office, in our bootcamps, with our people, etc. can feel that Cumulus Networks has “soul.” We’ve got a passion, substance, life and feeling that pulses throughout our space like a funky bass line.

My former employers are pretty varied, ranging from a restaurant to a university, but all of those jobs had one thing in common — they had no soul. To these Pink Floyd-ian businesses, employees were just cogs in the machine meant to forfeit passion for profit. Each day I dragged myself out of bed, drudged to work and slogged through my day. And I’m sure everyone reading this has experienced at least one of these soulless jobs in their life. Specifically, in the world of networking, I hear engineers talk about how they “did their time” at proprietary businesses as though they had served time in prison.

If you work with or follow Cumulus at all, you probably know all about our initiative to bring S.O.U.L (Simple, Open, Untethered Linux) into networking (for the uninitiated, feel free to check out our S.O.U.L page to learn all about the movement!). You could argue that S.O.U.L is what gives us “soul.” With that in mind, I’m not exaggerating when I say that, from the moment I started my first day at Cumulus Networks, I felt like a weight was lifted off my chest. Stepping into the business of open source software and open networking felt like walking into an open field, where I was encouraged to explore, collaborate and create. Never before in my career have I felt so supported and encouraged to think outside the box. And I’m not the only person who feels this way — Cumulus Networks was ranked as one of the best places to work for the Glassdoor 2017 awards!

So how exactly does that happen? Let’s break down some of the principles of open source development and communities, and how we’ve incorporated those through S.O.U.L to show how they change the office environment.

Simple solutions from complex minds: We believe that networking shouldn’t be complicated. That’s why we promote open source solutions like Network Command Line Utility (NCLU), automation and Zero Touch Provisioning (ZTP). Our amazing engineers make it look easy, but trust us, a lot of hard work goes into making life easier for network operators. So what does this look like in the office? It looks like groups of employees openly communicating with each other to find the best solution possible. It looks like efficiency in all departments. And, if you want a real-life example, it looks like a couple engineers putting their heads together and figuring out how to reconfigure the coffee grinder so the switch only needs to be pressed once, expediting the caffeination creation process. Now that’s zero touch provisioning!

Open hardware, open environment: Cumulus Networks is all about white box and open hardware. So, it makes sense that our office “hardware” is equally open and customizable. Forget traditional, closed-off cubicles; they have no place here. Instead, the space is open and full of large, adjustable desks with no dividers. If I need to talk with engineering, I don’t have to navigate a maze of tiny, grey prisons to find someone to talk to. I simply walk (or roll my chair) through the open, sunlit office. Plus, our higher-ups aren’t sectioned off in big, private offices. If I have a question for the CTO, all I have to do is turn around and ask (he sits right behind me, and sometimes he lets me pick what music we listen to).

Untethered creativity = limitless possibilities: Nobody here is just a cog in the corporate machine. Cumulus is founded on the idea that we’re driving forward, not keeping up. The limits associated with proprietary networking don’t hold back your network, and they certainly don’t get in the way of our engineering team’s ingenuity. It’s how we’re able to create amazing products for our customers like Cumulus NetQ, and contribute innovations like ONIE to the Open Compute Project. There’s nothing quite like working at a company where you can see creativity and innovation in action.

Linux — a language everyone can speak: Linux provides interoperability throughout the stack, which is why it’s often referred to as the language of the data center. And that’s how easy it is to communicate among teams at the Cumulus office. From sales to engineering, no matter what team you belong to, we care about every step in the operation. It’s all about complete interoperability — no bottlenecks here. We take the time to understand what’s going on in all parts of the company to keep things running smoothly. Here’s an example: you may think that a member of the marketing team wouldn’t know CI/CD from AC/DC, but did you know we’ve all taken Cumulus Linux training courses so we could seamlessly communicate with engineering? That’s right, we even configured switches using Cumulus VX! Everyone at Cumulus cares about Linux networking, but we also care about helping each other out and keeping communication as open as our network.

Cumulus Networks is dedicated to thinking outside of the box so we can innovate what’s inside of the white box, and the open source culture that ideology fosters is what gives us soul. It’s like the lyric from that song by The Killers: “I’ve got soul, but I’m not a soldier!” We’re not being marched around and having orders barked at us. Working here, working in the open, and working with open source means being an individual, and that’s the beauty of having both S.O.U.L and soul.

By Madison Emery, Marketing Intern,
Cumulus Networks, OSI Premium Sponsor

“Open Source Principles Give the Workplace Soul” by Madison Emery, copyright 2018, originally appeared on the Better Networking, Cumulus Networks Blog at https://cumulusnetworks.com/blog/netdevoped-working-at-cumulus/ and is used and adapted with permission.

Categories: Open Source

Liferay Portal 7.1 Beta 2 Release

Liferay - Tue, 05/29/2018 - 11:58
I'm pleased to announce the immediate availability of Liferay Portal 7.1 Beta 2.

Download Now | Fixed in Beta 2

New Features Summary

Modern Site Building: Liferay 7.1 introduces a new way of adding content. Fragments allow a content author to create content in small reusable pieces. Fragments can be edited in real time or can be exported and managed with the tooling of your choice. Use page templates from within a site and have complete control over the layout of your content pages. Navigation menus now give you complete control over site navigation: create it in new and interesting ways and have full control over its visual presentation.

Forms Experience: Liferay 7.1 includes a completely revamped forms experience. Forms can now have complex grid layouts, numeric fields and file uploads. They now include new personalization rules that let you customize the default behavior of the form. Using the new Element Sets, form creators can build groups of reusable components. Form fields can now be translated into any language using any Liferay locale and can also be easily duplicated.

Redesigned System Settings: System Settings has received a complete overhaul. Configurations have been logically grouped together, making it easier than ever before to find what's configurable. Several options that were located in Server Administration have also been moved to System Settings.

User Administration: The user account form has been completely redesigned. Each form section can now be saved independently of the others, minimizing the chance of losing changes. The new ScreensNavigationEntry lets developers add any form they want to user administration.

Improvements to Blogs and Forums: Blog readers can now unsubscribe from notifications via email. Friendly URLs used to be generated from the entry's title; authors now have complete control over the friendly URL of the entry. Estimated reading time can be enabled in System Settings and is calculated based on the length of the entry. Blogs also have a new cards ADT that can be selected from the application configuration. Videos from popular services such as YouTube, Vimeo, Facebook Video, and Twitch can now be added inline while writing a new entry. Message Boards users can now attach as many files as they want by dragging and dropping them into a post, and Message Boards has received many visual updates.

Workflow Improvements: Workflow has received a complete UI overhaul. All workflow configuration is now consolidated under one area in the Control Panel. Workflow definitions are now versioned, and previous versions can be restored. Workflow definitions can also be saved in draft form and published when they are ready.

Infrastructure: Many improvements have been incorporated at the core platform level, including Elasticsearch 6.0 and Tomcat 9.0. At the time of this release, JDK 8 is still the only supported JDK.

Documentation

Documentation for Liferay 7.1 is well underway. Many sections have already been completed in the Deployment and Development sections. For information on upgrading to 7.1, see the Upgrade Guide.

Jamie Sammons 2018-05-29T16:58:10Z
Categories: CMS, ECM


The EAN/UPC Barcode and Marketplaces

PrestaShop - Mon, 05/28/2018 - 09:20
Amazon Marketplace has always required EAN/UPC barcodes as the universal product coding for all of its sellers.
Categories: E-commerce
