AdoptOS

Assistance with Open Source adoption

Open Source News

New Liferay Project SDK Installers 3.3.1 GA2 Release

Liferay - Tue, 09/18/2018 - 02:33

We are pleased to announce the second generally available release of the Liferay Project SDK Installers.

New Installers

The installation problems are fixed in the new release, along with a few other bug fixes.

Download

Customers can download all of the installers from the customer studio download page.

Upgrade From Previous 3.x

  1. Download the update site here

  2. Go to Help > Install New Software… > Add…

  3. Select Archive..., then browse to the downloaded update site

  4. Click OK to close the Add Repository dialog

  5. Select all features to upgrade, then click Next; click Next again and accept the license agreements

  6. Finish and restart to complete the upgrade

Feedback

If you run into any issues or have any suggestions, please come find us on our community forums or report them on JIRA (IDE project); we are always around to try to help you out. Good luck!

Yanan Yuan 2018-09-18T07:33:00Z
Categories: CMS, ECM

Liferay Faces Released With Liferay Portal 7.1 Support!

Liferay - Mon, 09/17/2018 - 11:35

Liferay Faces Portal 3.0.3, Bridge Impl 4.1.2, and Bridge Ext 5.0.3 have been released with support for Liferay Portal 7.1! The release also includes several SPA/SennaJS and PrimeFaces fixes! The release is compatible with Liferay Portal 7.0 and 7.1. Go to liferayfaces.org to get the latest dependency configurations and archetype generate commands.

Liferay Faces Bridge Impl 4.1.2 Release Notes Highlights
  • [FACES-3333] - Add Liferay Portal 7.1 Support
  • [FACES-3327] - PrimeFaces exporter components (p:fileDownload, p:dataExporter, and pe:exporter) cause next navigation to fail

Full Release Notes

Liferay Faces Bridge Ext 5.0.3 Release Notes Highlights
  • [FACES-3333] - Add Liferay Portal 7.1 Support
  • [FACES-3175] - Navigating to page with the same resources but different portlet mode, window state, portlet id, or path via SPA causes certain components to fail
  • [FACES-3328] - PrimeFaces markup remains after SPA navigation in Liferay 7.0+

Full Release Notes

Liferay Faces Portal 3.0.3 Release Notes Highlights
  • [FACES-3339] - portal:inputRichText fails to rerender on Chrome (Liferay Portal 7.0 GA7)

Full Release Notes

Archetypes

Along with these updates, all of our JSF 2.2 compatible archetypes have been updated to the latest appropriate Liferay Faces artifacts and Mojarra 2.2.18.

Known Issues
  • [FACES-3340] - portal:inputRichText fails to render on Ajax request if render="false" on initial request
  • [FACES-3347] - Alloy components log warnings on rerender in Liferay Portal 7.1
  • [FACES-3348] - Selecting non-existent option for autoComplete (with ajax) causes non-Ajax submit on Liferay Portal 7.1
  • [FACES-3342] - JSP JSTL fails in Liferay Portal 7.1.10 GA1 + FP1

Please report any issues with this new release in JIRA and ask any questions in our forums.

Kyle Joseph Stiemann 2018-09-17T16:35:00Z
Categories: CMS, ECM

CiviCamp Hartford is in less than a week!

CiviCRM - Mon, 09/17/2018 - 11:18

CiviCamp Hartford 2018 is happening this coming Monday, 9/24, and for only $30 will feature:

Categories: CRM

CiviCRM - Android mobile App - Smartcivi

CiviCRM - Sun, 09/16/2018 - 19:22

 

I had always been thinking of developing a mobile app for CiviCRM, and in the process of achieving that, I have released an initial version of my mobile app, just for Android, named SMARTCIVI.

What is Smartcivi?

Smartcivi is an Android mobile application that displays CiviCRM content on a mobile device. For now, Smartcivi is just a read-only application.

Categories: CRM

Recognizing Web Content Mismanagement 

Liferay - Sat, 09/15/2018 - 19:24

I’ve been mulling over an oft-encountered requirement, and I figured it would be alright to jot down my thoughts, even if only to evoke a response. Nothing makes my day like a comment that goes, “Uh, actually, there is a better way to do that.”

Here’s the problem definition. It’s quite typical.

  1. Customer has one site. There will be a lot of content on it.

  2. Customer wants all content to be subject to a workflow.

  3. Here’s the important part. Customer wants different users (or groups of users) to be responsible for different content areas.

    1. When IT content gets edited, the IT content contributors should be the ones participating in the workflow to review/approve it.  When Finance content is edited, the Finance content contributors should be the ones participating in the workflow.

  4. The Problem: How can we address the above when a given asset type (in this case, web content) can only be associated with one workflow? Users who have the content reviewer role will be able to review any and all content. So how do we accomplish what is in 3(a) above?

Now, this is a no-brainer for anyone who’s really gotten into Liferay’s user and site management. But I’ve noticed that a lot of developers who are heads-down in their work, be it portlet implementation, theme-sculpting or other siloed work, don’t get how this can be addressed, usually because they have not found the time to read through the documentation.

It is important for us developers to understand how Liferay’s site and user management work at a fundamental level because a lot of Liferay’s feature sets are built to address scenarios just like this. Not knowing about these fundamentals can result in solutions that miss key considerations, or worse, reinvent the wheel. And we all know how that can impact our lives.

Let me get to it. Here are three ways to address the above problem. Nothing earth-shattering here.

The First Way

Add smarts to your workflow. We have some fantastic documentation on Liferay workflows that truly demonstrates the sky is the limit. With cleverly defined states and transitions and carefully written Java code running inside <condition> and <assignments> tags, it should be easy to accomplish the above. One approach goes like this:

  • Define a bunch of Regular Roles, one for each team. They don’t have to have any permissions. They would just be marker roles. E.g. Finance Reviewer, IT Reviewer, etc.

  • Assign users these roles as needed.

  • Use categories or tags or custom fields to organize your content in some standard conventional way. E.g. Finance content has the category Finance.

  • In your workflow, write the Java you need to examine the content’s category and then assign the content item to the corresponding marker role: if category == Finance, then role = Finance Reviewer. (A sketch of this follows below.)

Here is some documentation (with code) describing this exact scenario.
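To make 3(a) concrete, here is a minimal Java sketch of that category-to-role convention. It is not the documented sample: the class name and method are made up for illustration, the lookups assume the asset is web content (JournalArticle) identified by the classPK the workflow context provides, and how the resulting role gets handed back to the scripted assignment depends on the script language you pick, so treat the linked documentation as the real reference.

import java.util.List;

import com.liferay.asset.kernel.model.AssetCategory;
import com.liferay.asset.kernel.service.AssetCategoryLocalServiceUtil;
import com.liferay.journal.model.JournalArticle;
import com.liferay.portal.kernel.model.Role;
import com.liferay.portal.kernel.service.RoleLocalServiceUtil;

public class ReviewerRoleResolver {

	// Convention: category "Finance" maps to marker role "Finance Reviewer",
	// category "IT" maps to "IT Reviewer", and so on.
	public static Role resolveReviewerRole(long companyId, long classPK) {
		List<AssetCategory> categories =
			AssetCategoryLocalServiceUtil.getCategories(
				JournalArticle.class.getName(), classPK);

		for (AssetCategory category : categories) {
			String roleName = category.getName() + " Reviewer";

			// Returns null if no role with that name exists in the company.
			Role role = RoleLocalServiceUtil.fetchRole(companyId, roleName);

			if (role != null) {
				return role;
			}
		}

		// No matching marker role; let the workflow fall back to its default assignees.
		return null;
	}

}

A scripted assignment in the workflow definition would call something like this and add the returned role to the collection of assignees it hands back to Kaleo.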

Pros

  • One site to have it all. One workflow to rule them all.

  • Ultimate developer flexibility. Do whatever you need to. Just code it up.

Cons

  • Ultimate developer flexibility. Do whatever you need to. Just code it up.

This can be a problem. Think about when a developer leaves, and has not really transitioned his skills or knowledge to anyone else on the team. The ramp-up time for someone new to all this can be worrisome.  Add to that the possibility you have about 30-40 teams, hence 30-40 content areas and reviewer roles. Now, maybe the developer followed clever coding conventions so what would have been 500 lines of code got done in 50. But that sort of cleverness is only going to make it harder for the next guy to unravel what is really going on. Add to that any special handling for some of those roles. Anyway, you can use your imagination to come up with scary scenarios.

So, am I belittling the script soup that comes with the workflow entrée? Far from it. I think it’s delicious, if served in small portions. Hence the next two sections.

The Second Way - A Site Hierarchy

Yes. Define a hierarchy of Sites.

  • The parent site at the top is the main site. The parent site will have all the necessary pages containing a meaningful combination of Web Content Display portlet instances and Asset Publisher portlet instances.

  • The parent site may have some content, or none at all. The purpose of the parent site is to be. And a bit more, to serve as a coming-together ground for content from all the child sites.

  • Each child site reflects a team. Of course, each has its own content repository.

  • Site members of the child site contribute content.

  • Each child site still uses the same workflow that all its sibling sites use. But remember: each child site has its own Site Content Reviewer role. So, only members of the child site are candidates for reviewing content in it.

So, all that requirement noise from The First Way, such as Finance content must be reviewed by a Finance reviewer, gets muted.

  • We have a Finance site with users. Some of them are content reviewers. The workflows just work.

  • And when we do need some smarts added to the workflows via scripts, we add those in. E.g. if the content has a category Confidential, assign it to the specific user in this site having the category Department Head.

Small portions help avoid bloating.

The child sites may have pages with various content portlets on them, but none of the pages are served outside of the child site. So, we have some pretty sweet insulation here. It should be pretty clear by now what the portal architects had in mind.

The Third Way - An Organization Hierarchy

You remember the Users and Organizations section in the Control Panel. There really is an Organizations tab on that screen.

Define a hierarchy of Organizations that reflects the organization structure of the enterprise, more or less.

What is an Organization anyway? It’s basically a way to group users into an organizational unit. Departments can translate well to Organizations. See this awesome wiki article on Organizations, and how they’re different from User Groups.

https://dev.liferay.com/en/discover/portal/-/knowledge_base/7-1/organizations

Here are the salient points.

  • An Organization can be assigned users - that’s the whole point.

  • Each Organization can have an Organization Administrator designated for it. These privileged users can add users to the Organization or edit information for the existing ones, or remove them.

  • When you define an organization, Liferay gives you an option to Create a site for it.

    • So, if you do that for all the organizations in your hierarchy, you get an implicit Site hierarchy (much like The Second Way) wherein the Organization Administrator is, implicitly, the Site Administrator.

And with that, everything we said in The Second Way comes into play. The Third Way is essentially the Second Way with Organizations in the mix.

Now, I’ve noticed a few points of interest with an Organization hierarchy owing to the implicitness of the site associated with it (i.e. if one was chosen to be created). But, I’m not going to bring any of that up in this post because I don’t think they pose practical problems. I’m hoping someone will call out what they believe are the real problems, if any.

After all, this, like everything else, is just an elaborate exercise to bring us the words, “Uh, actually, there is a better way to do that.”

Javeed Chida 2018-09-16T00:24:00Z
Categories: CMS, ECM

Serverless: A Game Changer for Data Integration

Talend - Fri, 09/14/2018 - 17:46

The concept of cloud computing has been around for years, but cloud services truly became democratized with the advent of virtual machines and the launch of Amazon Elastic Compute Cloud (EC2) in 2006.

Following Amazon, Google launched Google App Engine in 2008, and then Microsoft launched Azure in 2010.

At first, cloud computing offerings were not all that different from each other. But as with nearly every other market, segmentation quickly followed growth.

In recent years, the cloud computing market has grown large enough for companies to develop more specific offers with the certainty that they’ll find a sustainable addressable market. Cloud providers went for ever more differentiation in their offerings, supporting features and capabilities such as artificial intelligence/machine learning, streaming and batch, etc.

The very nature of cloud computing, the abundance of offerings and the relative low cost of services took segmentation to the next level, as customers were able to mix and match cloud solutions in a multi-cloud environment. Hence, instead of niche players addressing the needs of specific market segments, many cloud providers can serve the different needs of the same customers.

Introduction to Serverless

The latest enabler of this ultra-segmentation is serverless computing. Serverless is a model in which the cloud provider acts as the server, dynamically managing the allocation of resources and time. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity.

With this model, server management and capacity planning decisions are hidden from users, and serverless code can be used in conjunction with code deployed in microservices.

As research firm Gartner Inc. has pointed out, “serverless computing is an emerging software architecture pattern that promises to eliminate the need for infrastructure provisioning and management.” IT leaders need to adopt an application-centric approach to serverless computing, the firm says, managing application programming interfaces (APIs) and service level agreements (SLAs), rather than physical infrastructures.

The concept of serverless is typically associated with Functions-as-a-Service (FaaS). FaaS is a perfect way to deliver event-based, real-time integrations. FaaS cannot be thought of without container technologies, both because containers power the underlying functions infrastructure and because they are perfect for long-running, compute-intensive workloads.

The beauty of containers lies in big players such as Google, AWS, Azure, Red Hat and others working together to create a common container format – this is very different from what happened with virtual machines, where AWS created AMI, VMware created VMDK, Google created Google Image, etc. With containers, IT architects can work with a single package that runs everywhere. This package can contain a long-running workload or just a single service.

Serverless and Continuous Integration

Serverless must always be used together with continuous integration (CI) and continuous delivery (CD), helping companies reduce time to market. When development time is reduced, companies can deliver new products and new capabilities more quickly, something that’s extremely important in today’s market. CI/CD manages the additional complexity that comes with a fine-grained, serverless deployment model. Check out how to go serverless with Talend through CI/CD and containers here.

Talend Cloud supports a serverless environment, enabling organizations to easily access all cloud platforms; leverage native performance; deploy built-in security, quality, and data governance; and put data into the hands of business users when they need it.

Talend’s strategy is to help organizations progress on a journey to serverless, beginning with containers-as-a-service, to function-as-a-service, to data platform-as-a-service, for both batch and streaming. It’s designed to support all the key users within an organization, including data engineers, data scientists, data stewards, and business analysts.

An organization’s data integration backbone has to be native and portable, according to the Talend approach. Code-native means there is no additional runtime and no additional development needed. Nor does the code become proprietary, so there is no lock-in to a specific environment. This enables flexibility, scale and performance.

The benefits of serverless are increased agility, unlimited scalability, simpler maintenance, and reduced costs. It supports a multi-cloud environment and brings the pay-as-you-go model to reality.

The serverless approach makes data-driven strategies more sustainable from a financial point of view. And that’s why serverless is a game changer for data integration. Now there are virtually infinite possibilities for data on-demand. Organizations can decide how, where, and when they process data in a way that’s economically feasible for them.

The post Serverless: A Game Changer for Data Integration appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

jQuery in Liferay 7.0

Liferay - Fri, 09/14/2018 - 14:37
Introduction

So those who know me or have worked with me know that I hate theming.

I do. I find it to be one of the hardest things in Liferay to get right.

I can build modules all day long, but ask me how to make the default buttons increase the height by a couple of pixels and change the color to orange, and I honestly have to start digging through doco and code to figure out how to do it.

Friends that I work with that focus on front end stuff? They run circles around me. Travis and Alex, you guys know I'm referring to you. AMD loader issues? I feel helpless and have to reach out to another friend, Chema, to set me straight.

So recently I was working with a client who was trying to get a silly jQuery mask plugin to work, and they were struggling. They asked for my help.

Well, I got it working through a bunch of trial and error. What I was missing was a kind of straightforward guide telling me how to build a theme that had jQuery in it (a full jQuery, not the minimal version Liferay needs for Bootstrap) and would allow me to pull in perhaps some jQuery UI, but at the very least I needed to get the Mask plugin to work.

Since I couldn't find that guide, well I just had to write a blog that could be that guide.

Note that I haven't shown this to Travis, Alex or even Chema. I honestly expect when they see what I've done here, they will just shake their heads, but hopefully they might point out my mistakes so I get all of the details right.

Creating The Theme

So I have to rely on the doco and tooling to help me with theme creation. Fortunately we all have access to https://dev.liferay.com/en/develop/tutorials/-/knowledge_base/7-0/themes-generator because that's my go-to starting point.

I used the theme generator, version 7.2.0 (version 8.0.0 is currently beta as I write this, but I'm guessing it is really focused on Liferay 7.1). I pretty much used all of the defaults for the project, targeting Liferay 7.0 and using Styled as my starting point. This gave me a basic Gulp-based theme project which is as good a starting point as any.

I used the gulp build command to get the build directory that has all of the base files. I created the src/templates directory and copied build/templates/portal_normal.ftl over to the src/templates directory.

Adding jQuery

So there's two rules that I know about including jQuery in the theme thanks to my friend Chema:

  1. Load jQuery after Liferay/AUI has loaded the minimal jQuery.
  2. Use no conflict mode.

Additionally there is a best practice recommendation to use a namespace for your jQuery. This will help to ensure that you isolate your jQuery from anything else going on in the portal.

So I know the rules, but knowing the rules and implementing a solution can sometimes seem worlds apart.

In my src/templates/portal_normal.ftl file, I changed the <head /> section to be:

<head>
    <title>${the_title} - ${company_name}</title>

    <meta content="initial-scale=1.0, width=device-width" name="viewport" />

    <@liferay_util["include"] page=top_head_include />

    <script src="https://code.jquery.com/jquery-latest.js"></script>
    <script type="text/javascript">
        // handle the noconflict designation, use namespace dnjq for DN's jQ.
        dnjq = jQuery.noConflict(true);
    </script>
</head>

Okay, so since I'm doing this just before the end of the closing tag for <head />, I should be loading jQuery after Liferay/AUI has loaded its version. I'm also using no conflict mode, and finally I'm following the best practice and using my own namespace, dnjq.

I can use the gulp deploy command to now build my theme and deploy it to my locally running Liferay 7 instance (because I did the full configuration during project setup). In the console tailing Tomcat's catalina.out file, I can see that my theme is successfully deployed, processed and made available.

I can now create a new page and assign my theme to it. Now anyone who has done this much before, you already know that the page rendered using this simple theme is, well, pretty ugly. I mean, it's missing a lot of the normal sort of chrome I'd expect to see in a base theme including some initial positioning, margins, etc. I know, I know, Styled is meant for the experts like my friends Travis and Alex and any kind of initial defaults for those would just get in their way. For me, though, I'd be much better served if there were some kind of "Styled++" base theme that was somewhere between Styled and Classic (aka Clay Atlas), and honestly closer to the Classic side of the table. But we're not here to talk about that, so let's keep going.

So the page renders its ugly self but it looks like the same ugly self I've seen before, so nothing is broken. I can view source on the page and see that my changes to portal_normal.ftl were included, so that's good. I can even see that my namespace variable is there, so that's good too. So far this seems like a success.

Adding jQuery Mask

So my next step is to include the jQuery Mask Plugin. This is actually pretty easy to do; I just add the following line after my <script /> tag that pulls in jquery-latest.js:

<script src="http://igorescobar.github.io/jQuery-Mask-Plugin/js/jquery.mask.min.js"></script>

I pulled the URL straight from Igor's site because his demo is working, so I should have no problems.

I use gulp deploy to rebuild the theme and send it to the bundle, the console shows it successfully deploys and my page with my custom theme still renders fine when I refresh the page.

I did see an error in the console:

Mismatched anonymous define() module: function(a){var l=function(b,e,f){...

But it is reportedly coming from everything.jsp (which I didn't touch). So I'm worried about the warning, yes, but still am feeling pretty good about my progress, so on we go.

Testing the Mask

To test, I just created a simple web content. I had to use the "code" mode to get to the HTML fragment view, then I used the following:

<div class="input-group"><label for="date">Date</label>&nbsp;<input class="dn-date" type="text" /></div>

<script type="text/javascript">
    dnjq(document).ready(function() {
        dnjq('.dn-date').mask('00/00/0000');
    });
</script>

Nothing fancy here, just a test to verify that my theme was going to deliver the goods.

I save the web content then add it to my page and, well, fail.

In the console I can see:

Uncaught TypeError: dnjq(...).mask is not a function
    at HTMLDocument. (alt:453)
    at fire (jquery-latest.js:3119)
    at Object.fireWith [as resolveWith] (jquery-latest.js:3231)
    at Function.ready (jquery-latest.js:3443)
    at HTMLDocument.completed (jquery-latest.js:3474)

Nuts. I know this should work; it is working on Igor's page and I haven't really changed anything. So of course mask() is a function.

Diving Into The Source

So I have to solve this problem, and I approach it the same way any developer would: I go and check out Igor's code. Fortunately, he has shared the project on GitHub.

I don't have to go very far into the code before I realize what my problem is. Here's the relevant fragment, I'll give you a second to look at it and guess where the problem is:

// UMD (Universal Module Definition) patterns for JavaScript modules that work everywhere.
// https://github.com/umdjs/umd/blob/master/templates/jqueryPlugin.js
(function (factory, jQuery, Zepto) {
    if (typeof define === 'function' && define.amd) {
        define(['jquery'], factory);
    } else if (typeof exports === 'object') {
        module.exports = factory(require('jquery'));
    } else {
        factory(jQuery || Zepto);
    }
}(function ($) {...

I see this and I'm thinking that my problem lies with the AMD loader, or at least my lack of understanding how to get it to correctly deal with my script import. It has stepped in and smacked me around, leaving me standing there holding nothing but a bunch of broken javascript.

AMD Bypass

So "ha ha", I think, because I know how to bypass the AMD loader...

I download the jquery.mask.js file and save it in my theme as src/js/jquery.mask.js. I then change the stanza above to simplify it as:

// UMD (Universal Module Definition) patterns for JavaScript modules that work everywhere.
// https://github.com/umdjs/umd/blob/master/templates/jqueryPlugin.js
(function (factory, jQuery, Zepto) {
    factory(jQuery || Zepto);
}(function ($) {...

Basically I just strip out everything that might be going to the AMD loader and just get the browser and jQuery to load the plugin.

Retesting the Mask

I change the portal_normal.ftl line for the mask plugin to be:

<script src="${javascript_folder}/jquery.mask.js"></script>

It will now pull from my theme rather than the web and will use my "fixed" version.

So I gulp deploy my theme, it builds, deploys and starts without any problems in the console, so far so good.

I refresh my browser page (I'm using incognito mode, so no cache to worry about). No errors in the console, still looking good.

I enter some test numbers in the displayed input field, and it all seems to work.

Wahoo! Success!

Conclusion

Well, kind of.

I mean, like I said, this is probably not the right way to do all of this. I'm sure my friends Travis, Alex and Chema will point out my mistakes, well after they're done laughing at me.

Until then, I can at least consider this issue kind of closed...

David H Nebinger 2018-09-14T19:37:00Z
Categories: CMS, ECM

Theming in Liferay 7.1

Liferay - Fri, 09/14/2018 - 10:47
Introduction

In this article I’ll try to give you a comprehensive picture of the current state of theming in Liferay 7.1. To do this, I’ll describe the evolution of Liferay theming from the early Bootstrap use in 6.x themes to the introduction of Lexicon for 7.0, as well as identifying what has changed in 7.1. Also, I’ll add some practical cases that could help you when building themes and themelets depending on your choices.

The (ab)use of Bootstrap

Before getting into what Lexicon is, we need to talk about Bootstrap. It has been used in Liferay since 6.2 — and is still used, in a way, but we’ll see that later — and as web developers we have certainly all used it at one time or another.

Bootstrap is a CSS framework: it gives you a set of CSS classes ready to use in order to build user interfaces.

Bootstrap is not a design language: it doesn’t provide a proper design system to improve your user experience through consistent interfaces.

And I’m sure that you have already experienced the problem that comes with it, because the unfortunate consequence can look like this:

Of course, this list is not exhaustive. Sadly, you can imagine a lot more combinations (e.g. with positioning).

It’s a common case to use Bootstrap as both a CSS framework and a design language. Bootstrap’s design system is kind of implicit because its current rules are imprecise. For example:

“Bootstrap includes several predefined button styles, each serving its own semantic purpose, with a few extras thrown in for more control.”

Source: Bootstrap documentation

What does this really mean? In my previous use case of Save/Cancel, is it ok to use .btn-danger when it’s dangerous to cancel and .btn-default when it’s safe within the same application? As it is, we could say that all of our previous interfaces are following Bootstrap rules. But the user experience can become a nightmare.

So in order to create the best UX/UI design for your application, you need a proper design system you can rely on.

Understanding Lexicon: Liferay’s own design system

The new design of Liferay 7.0 is not just a new classic theme with a Bootstrap upgrade. Design in Liferay is more important than ever. And based on what we mentioned previously, Liferay needed a proper design system. Consequently, Lexicon has been created.

Lexicon is not a CSS framework: “it is just a set of patterns, rules and behaviors.”

Lexicon is a design language: “a common framework for building interfaces within the Liferay product ecosystem.”

Source: Lexicon documentation

It’s the opposite of the definition we gave for Bootstrap. But unlike Bootstrap, you can’t be misled into using Lexicon as a CSS framework, since it doesn’t provide any ready-to-use code.

To continue with our button example, here’s a sample of what you can find about it with Lexicon:

Source: Lexicon documentation

Lexicon helps designers

While Lexicon is not like Bootstrap, we can compare it to Google Material Design, Microsoft Metro or iOS Design. And just like them, Lexicon is for designers. In order to create the look and feel of an application, designers need two important pieces of documentation: the graphic charter — which designers may have built themselves, but not necessarily — and the design language system.

Taking our previous example with buttons, the graphic charter would define the color and shape of our primary button in order to reflect the graphical identity of the company and/or the product, whereas the design language would define in which cases to use primary buttons and how to use them in order to ensure consistent integration within its ecosystem.

Let’s consider this statement:

Following the Material Design system to create the look and feel of an Android application guarantees that your application will integrate properly within the Android ecosystem.

Now the same philosophy applies for Liferay:

Following the Lexicon design system to create the look and feel of a Liferay site guarantees that your site will integrate properly within the Liferay ecosystem.

More components

The set of components — or pattern library — is defined by the design language system. This is yet another area where Bootstrap is misleading in its role. But now with Lexicon, the number of components can be expanded as needed. And most importantly, the list of its components is not bound to what Bootstrap provides, and thus to Bootstrap upgrades.

A concrete example is the timeline component, available in Liferay 7.1 but missing from Bootstrap 4.

Implementing Lexicon: Lexicon CSS becomes Clay

In 7.0, Liferay implemented Lexicon to apply its guidelines to a new look and feel and called it Lexicon CSS. But I guess the name added too much confusion. So now in Liferay 7.1, Lexicon’s implementation is Clay.

Where is Bootstrap?

Bootstrap is not gone with the arrival of Lexicon and Clay. Liferay 7.0 used Bootstrap 3 and now Liferay 7.1 uses Bootstrap 4, but to what extent? Where is Bootstrap?

Well, Bootstrap is used as a base framework in Clay, so Clay is an extension of Bootstrap. In other words, Clay provides existing CSS classes from Bootstrap as well as new CSS classes, all of them built to be Lexicon compliant.

For example, if you want to define an HTML element as a fluid container you can use .container-fluid (Bootstrap). But if you want a fluid container that doesn’t expand beyond a set width you can also use the .container-fluid-max-* classes (Clay).

What about icons?

Lexicon provides a set of icons. Clay implements it as a set of SVG icons. But you are free to use another icon library of your choice such as Font Awesome or Glyphicons.

What is Clay Atlas?

Clay Atlas is the default base theme in Liferay.

Because Lexicon is a design language, you could imagine building an equivalent of Clay, either from scratch or on top of your favorite CSS framework (e.g. Bootstrap, Foundation, Bulma), to integrate into a product other than Liferay.

For example, we could imagine an implementation called Gneiss with two themes named Alps and Himalayas:

With this diagram, we highlight the role of each implementation and the possibilities that come with it:

  • Multiple Lexicon implementations

  • Multiple themes for Lexicon implementations

  • Multiple custom themes extending from a parent theme

In Liferay, you can either build a theme independent of Atlas, or based on Atlas.

Theme implementation concerns

This part will not cover all the steps required to create a theme because Liferay documentation does it properly. We will focus on the building process and some parts that need your attention.

Clay vs Bootstrap

In Liferay 6.2, Liferay used Bootstrap components, and so did you. So when it came to customization, you wanted to customize Bootstrap.

But as we saw in this article, Clay is an extension of Bootstrap, and Liferay is now using Clay components instead of Bootstrap components. So now, you want to use Clay components too and thus, you want to customize Clay instead of Bootstrap.

For example, alerts for error messages with Clay:

<div class="alert alert-danger" role="alert">
    <span class="alert-indicator">
        <svg aria-hidden="true" class="lexicon-icon lexicon-icon-exclamation-full">
            <use xlink:href="${images_folder}/clay/icons.svg#exclamation-full"></use>
        </svg>
    </span>
    <strong class="lead">Error:</strong> This is an error message
</div>

Instead of alerts with Bootstrap:

<div class="alert alert-danger" role="alert"> Error: This is an error message </div>

Check out available components on Clay’s site.

Customizing Clay

Even if — for some reason — you still want to use Bootstrap components, you need to customize Clay components, because Liferay is using them and consequently your users will experience them. If you only customize Bootstrap components, Clay components would get only part of the customization and the user experience would be inconsistent (e.g. successful alerts after a save/submission).

Customizing Clay is the same process as Bootstrap: you want to work with SCSS files in order to override variables. You can find these variables in the official GitHub repository.

Do you remember when you could customize Bootstrap online and download the result?

For each variable in Bootstrap, you had a corresponding input. So customization was quick and handy.

Guess what? Now there’s an awesome tool like that for Clay called Clay Paver (for Liferay 7.0, but soon for 7.1).

IMHO, it’s actually better because you can preview the result online while you’re editing.

It’s open source on GitHub and you can run it locally. So if you like it, please star it to support its author Patrick Yeo for providing such a great tool to the community.

Integrating a Bootstrap 4 theme

In this case, you want to use an existing Bootstrap 4 theme and build a Liferay theme from it. You can find examples here where I built Liferay 7.1 themes from startbootstrap.com Bootstrap 4 themes using Liferay documentation.

However, each Bootstrap theme can be built differently, so you may run into problems when you want to integrate it into a Liferay theme. So, we’re going to take a closer look at some of the potential pitfalls.

Use SCSS files

You need to use the uncompiled version of the Bootstrap theme, i.e. multiple SCSS files.

These SCSS files are included in a subfolder of ${theme_root_folder}/src/css, as mentioned in the documentation.

Don’t use a compiled CSS file (e.g. my-bootstrap-theme.css or my-bootstrap-theme.min.css). If you’re working from a Bootstrap theme that doesn’t include SCSS files, you need to ask for them because these files should be provided.

Verify variables

In some cases, a Bootstrap theme is using custom variables. For example, instead of $primary it could be something like $custom-color-primary, and thus $primary is not overridden when integrating your Bootstrap theme in your Liferay theme.

The least painful way to resolve this is to map the custom variables to Bootstrap variables in a dedicated file (e.g. _map_variables.scss):

$primary: $custom-color-primary;

This dedicated file is then imported into _clay_variables.scss.

Choose your icon provider

There’s a good chance your Bootstrap 4 theme doesn’t use Font Awesome 3, which is the version included in Liferay. In that case, you can:

  • Add Font Awesome (4+) in your theme

  • Downgrade Font Awesome icons (not recommended; it can be problematic for theme consistency, and some icons don’t exist in Font Awesome 3)

  • Migrate to Lexicon icons

Conclusion

The new UI experience in Liferay 7 is not a simple “modern, refreshed look”. As developers, we might have felt like it was, considering that Lexicon and Clay were new things introduced by Liferay that didn’t seem to require our attention because Bootstrap was still there. But there’s so much work and thinking behind website design with Lexicon and Clay that understanding them becomes the key to building and extending Liferay themes. And I hope this article helped you with that.

Resources

Documentation

Lexicon

Clay

Liferay 7.0 on Themes

Liferay 7.1 on Themes

Videos

Introducing Lexicon: The Liferay Experience System

New Improvements in Lexicon: Features for Admin and Sites

Articles

Lexicon update in Liferay 7.1 from 7.0

Designing animations for a multicultural product

How Good Design Enhances Utility

Others

Clay Paver

Liferay on Dribbble

Liferay Design Site

Louis-Guillaume Durand 2018-09-14T15:47:00Z
Categories: CMS, ECM

Sales stress: 5 ways managers can help their team

VTiger - Fri, 09/14/2018 - 03:40
If you’re a sales manager, we understand that you manage a team that constantly works under high pressure. Every time you set an ambitious goal or create a competitive environment for your team, you trigger stress – that is both helpful and hurtful. When harnessed, the stress helps your team members push themselves to reach […]
Categories: CRM

How to Spot a DevOps Faker: 5 Questions, 5 Answers

Talend - Thu, 09/13/2018 - 18:55

With the rapid growth of DevOps teams and jobs, it follows that there are candidates out there who are inflating–or flat-out faking–their relevant skills and experience. We sat down with Nick Piette, Director of Product Marketing API Integration Products here at Talend to get the inside scoop on how to spot the DevOps fakers in the crowd:

What clues should you look for on a resume or LinkedIn profile that someone is faking their DevOps qualifications?

Nick: For individuals claiming DevOps experience, I tend to look for the enabling technologies we’ve seen pop up since the concept’s inception. What I’m looking for often depends on where they are coming from. If I see they have solid programming experience, I look for complementary examples where the candidate mentions experience with source control management (SCM), build automation or containerization technologies. I’m also looking for what infrastructure monitors and configuration management tools they have used in the past. The opposite is true when candidates come from an operations background. Do they have coding experience, and are they proficient in the latest domain-specific languages?

What signs should you look for in an interview? How should you draw these out?

Nick: DevOps is a methodology. I ask interviewees to provide concrete examples of overcoming some of the challenges many organizations run into: how the candidate’s team reduced the cost of downtime, how they handled the conversion of existing manual tests to automated tests, what plans they implemented to prevent code getting to the main branch, what KPIs were used to measure and dashboard. The key is the detail: individuals who are vague and lack attention to detail raise a red flag from an experience standpoint.

 Do you think DevOps know-how is easier to fake (at least up to a point) than technical skills that might be easier caught in the screening/hiring process?

Nick: Yes, if the interviewer is just checking for understanding vs. experience. It’s easier to read up on the methodology and best practices and have book smarts than it is to have the technology experience and street smarts. Asking about both during an interview makes it harder to fake.

How can you coach people who turn out to have DevOps-related deficiencies?

Nick: Every organization is different, so we always expect some sort of deficiency related to the process. We do the best we can to ensure everything here is documented. We’re also practicing what we preach–it’s a mindset and a company policy.

Should we be skeptical of people who describe themselves as “DevOps gurus,” “DevOps ninjas,” or similar in their online profiles?

Nick: Yes. There is a difference between being an early adopter and an expert. While aspects of this methodology have been around for a while, momentum really started over the last couple years. You might be an expert with the technologies, but DevOps is much more than that.

 

 

The post How to Spot a DevOps Faker: 5 Questions, 5 Answers appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Three reasons organizations need self-service integration now

SnapLogic - Thu, 09/13/2018 - 14:11

Consumer behavior today is very different than it was 10 years ago – the rise of cloud applications and mobile apps now let people complete their goals and intents with a simple tap or click. Information is more accessible than ever and complex tasks can be completed effortlessly. A simple example worth repeating – anyone[...] Read the full article here.

The post Three reasons organizations need self-service integration now appeared first on SnapLogic.

Categories: ETL

Blade Extensions and Profiles

Liferay - Thu, 09/13/2018 - 06:53

Hello all!

 

We want our development tools to be flexible and extensible enough to meet our users’ requirements, and with this in mind, we have developed two new concepts for Blade: Extensions and Profiles.

Extensions allow you to develop your own custom commands for Blade, in which Blade will act as a framework and invoke your command when it is specified on the CLI. You, as the developer of the custom command, may choose the command name and help text, and implement it to meet your requirements. We have also included the ability to install and manage custom extensions directly from the Blade CLI, in the form of "blade extension install" and "blade extension uninstall".

Building upon Extensions, we have also created Profiles, which is basically a metadata flag associated with a given Liferay Workspace. When a custom command is created for Blade, it may be associated with a specific profile type (by using the annotation @BladeProfile), and after this custom command is installed for a user, it will be available in any Liferay Workspace associated with that profile. 

To create a workspace associated with a particular profile, Blade may be invoked as "blade init -b foo", where foo is the profile. (We may change this flag to -p / --profile, or add support for both. What do you think?)

If you would like to get started with these features, please look here https://github.com/liferay/liferay-blade-cli/tree/master/extensions for implementation examples (more to come).
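To give you a feel for what a custom command looks like, here is a rough Java sketch modeled on those samples. The package names, the BaseCommand/BaseArgs classes, the JCommander @Parameters annotation and the getBladeCLI().out(...) call are all assumptions based on the samples, so double-check the repository for the exact API before copying anything.

import com.beust.jcommander.Parameters;

import com.liferay.blade.cli.command.BaseArgs;
import com.liferay.blade.cli.command.BaseCommand;
import com.liferay.blade.cli.command.BladeProfile;

@Parameters(
	commandDescription = "Prints a greeting from the foo profile",
	commandNames = "hello"
)
class HelloArgs extends BaseArgs {
}

// Only available in workspaces created with "blade init -b foo".
@BladeProfile("foo")
public class HelloCommand extends BaseCommand<HelloArgs> {

	@Override
	public void execute() throws Exception {
		getBladeCLI().out("Hello from the foo profile!");
	}

	@Override
	public Class<HelloArgs> getArgsClass() {
		return HelloArgs.class;
	}

}

As I understand it, the samples also register the command class for Java's ServiceLoader and package everything as a jar that "blade extension install" can pick up.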

Please let us know what you think of these features, if there is anything you would like to see added or changed in their implementation, or if you have comments about Blade in general. Your feedback helps us refine and improve the development experience, and it is very much appreciated!

Thank you,

Chris Boyd

Christopher Boyd 2018-09-13T11:53:00Z
Categories: CMS, ECM

Creating Real-Time Anomaly Detection Pipelines with AWS and Talend Data Streams

Talend - Wed, 09/12/2018 - 13:35
Introduction

Thanks for continuing to read all of our streaming data use cases during my exploration of Talend Data Streams. For the last article of this series, I wanted to walk you through a complete IoT integration scenario using a low consumption device and leveraging only cloud services.

In my previous posts, I’ve used a Raspberry Pi and some sensors as my main devices. This single-board computer is pretty powerful and you can install a light version of Linux on it as well. But in real life, enterprises will probably use System on Chip (SoC) devices such as Arduino, PLC, ESP8266… Those SoCs are less powerful, consume less energy and are mostly programmed in C, C++ or Python. I’ll be using an ESP8266 that has embedded Wi-Fi and some GPIO pins to attach sensors. If you want to know more about IoT hardware, have a look at my last article "Everything You Need to Know About IoT: Hardware".

Our use case is straightforward. First, the IoT device will send sensor values to Amazon Web Services (AWS) IoT using MQTT. Then we will create a rule in AWS IoT to redirect the device payload to a Kinesis stream. Next, from Talend Data Streams, we will connect to the Kinesis stream and transform our raw data using standard components. Finally, with the Python processor, we will create an anomaly detection model using Z-score, and all anomalies will be stored in HDFS.

<<Download Talend Data Streams for AWS Now>>

Pre-requisites

If you want to build your pipelines along with me, here’s what you’ll need:

  • An Amazon Web Services (AWS) account
  • AWS IoT service
  • AWS Kinesis streaming service
  • AWS EMR cluster (version 5.11.1 and Hadoop 2.7.X) on the same VPC and Subnet as your Data Streams AMI.
  • Talend Data Streams from Amazon AMI Marketplace. (If you don’t have one follow this tutorial: Access Data Streams through the AWS Marketplace)
  • An IoT device (can be replaced by any IoT data simulator)
High-Level Architecture

Currently, Talend Data Streams doesn’t feature an MQTT connector. To get around this, you’ll find an architecture sample below that leverages Talend Data Streams to ingest IoT data in real time and store it in a Hadoop cluster.

Preparing Your IoT Device

As mentioned previously, I’m using an ESP8266, also called a NodeMCU; it has been programmed to:

  • Connect to a Wi-Fi hotspot
  • Connect securely to the AWS IoT broker using the MQTT protocol
  • Read distance, temperature and humidity sensor values every second
  • Publish the sensor values over MQTT to the topic IoT

If you are interested in how to develop an MQTT client on the ESP8266, take a look at this link. However, you could use any device simulator; a simple one in Java follows.
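Here is a minimal simulator sketch using the Eclipse Paho Java MQTT client that mimics what my ESP8266 does: publish a semicolon-separated distance;temperature;humidity reading to the IoT topic every second. The endpoint, client ID and value ranges are placeholders, and the TLS setup with the certificates you download from AWS IoT below is left out, so treat it as a starting point rather than a working device client.

import java.util.Random;

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class SensorSimulator {

	public static void main(String[] args) throws Exception {
		// Placeholder endpoint; AWS IoT requires TLS on port 8883 plus the
		// certificate and key files generated during thing registration (omitted here).
		String endpoint = "ssl://YOUR-AWS-IOT-ENDPOINT.amazonaws.com:8883";

		MqttClient client = new MqttClient(endpoint, "esp8266-simulator");

		MqttConnectOptions options = new MqttConnectOptions();
		// options.setSocketFactory(...) would carry the AWS IoT certificates.
		client.connect(options);

		Random random = new Random();

		while (true) {
			// distance;temperature;humidity, matching the semicolon-delimited CSV
			// format the Data Streams dataset expects later in this article.
			String payload = String.format(
				"%d;%d;%d", 100 + random.nextInt(50), 20 + random.nextInt(10),
				40 + random.nextInt(20));

			client.publish("IoT", new MqttMessage(payload.getBytes()));

			Thread.sleep(1000L);
		}
	}

}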

IoT Infrastructure: AWS IoT and Kinesis

AWS IoT:

The AWS IoT service is a secure, managed MQTT broker. In this first step, I’ll walk you through registering your device and generating the public/private key pair and C.A. certificate needed to connect securely.

First, log in to your Amazon Web Services account and look for IoT. Then, select IoT Core in the list.

Register your connected thing. From the left-hand side menu click on “Manage”, select “Things” and click on “Create”.

Now, select "Create a single thing" from your list of options (alternatively, you can select "Create many things" for bulk registration of things).

Now give your thing a name (you can also create device types, groups and other searchable attributes). For this example, let’s keep default settings and click on next.

Now, secure the device authentication using the "One-click certificate creation" option: click on "Create a Certificate".

Download all the files; they have to be stored on the edge device and used with the MQTT client to connect securely to AWS IoT. Click on "Activate", then "Done".

In order to allow our device to publish messages and subscribe to topics, we need to attach a policy. From the menu, click on "Secure" and select "Policies", then click on "Create".

Give a name to the policy; in Action, start typing IoT and select it. To allow all actions, tick the "Allow" box below and click on "Create".

Let’s attach this policy to a certificate. From the left menu, click on "Secure", select "Certificates", and click on the certificate of your thing.

If you have multiple certificates, click on "Things" to make sure you have the right certificate. Next, click on "Actions" and select "Attach Policy".

Select the policy we’ve just created and click on “Attach”.

Your thing is now registered and can connect, publish messages and subscribe to topics securely! Let’s test it (it’s now time to turn on the ESP).

Testing Your IoT Connection in AWS

From the menu click on Test, select Subscribe to a topic, type IoT for a topic and click on “Subscribe to Topic”. 

You can see that sensor data is being sent to the IoT topic.

Setting Up AWS Kinesis

On your AWS console search for “Kinesis” and select it.

Click on “Create data stream”.

Give your stream a name and select 1 shard to start out. Later on, if you add more devices, you’ll need to increase the number of shards. Next, click on "Create Kinesis stream".

OK, now we are all set on the Kinesis part. Let’s return to AWS IoT: on the left menu, click on "Act" and press "Create".

Name your rule, select all the attributes by typing “*” and filter on the topic IoT.

Scroll down and click on “Add Action” and select “Sends messages to an Amazon Kinesis Stream”. Then, click “Configure action” at the bottom of the page.

Select the stream you’ve previously created, and use an existing role or create a new one that AWS IoT can use. Click on "Add action" and then "Create Rule".

We are all set at this point, the sensor data collected from the device through MQTT will be redirected to the Kinesis Stream that will be the input source for our Talend Data Streams pipeline.

Cloud Data Lake: AWS EMR

Currently, with the Talend Data Streams free version, you can use HDFS but only with an EMR cluster. In this part, I’ll describe how to provision  a cluster and how to set up Data Streams to use HDFS in our pipeline.

Provision your EMR cluster

Continuing on your AWS Console, look for EMR.

Click on “Create cluster”.

Next, go to advanced options.

Let’s choose a release that is fully compatible with Talend Data Streams; 5.11.1 or below will do. Then select the components of your choice (Hadoop, Spark, Livy, Zeppelin and Hue in my case). We are almost there, but don’t click on Next just yet.

In the Edit software settings section, we are going to edit core-site.xml when the cluster is provisioned, in order to use the specific compression codecs required for Data Streams and to allow root impersonation.

Paste the following code to the config:

[
  {
    "Classification": "core-site",
    "Properties": {
      "io.compression.codecs": "org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec",
      "hadoop.proxyuser.root.hosts": "*",
      "hadoop.proxyuser.root.groups": "*"
    }
  }
]

On the next step, select the same VPC and subnet as your Data Streams AMI and click “Next”. Then, name your cluster and click “Next”.

Select an EC2 key pair, go with the default settings for the rest, and click on "Create Cluster". After a few minutes, your cluster should be up and running.

Talend Data Streams and EMR set up

Still on your AWS Console, look for EC2.

You will find 3 new instances with blank names that we need to rename. Then by looking at the security groups you can identify which one is the master node.

Now we need to connect to the master node through SSH (check that your client computer can access port 22; if not, add an inbound security rule to allow your IP). Because we need to retrieve the Hadoop config files, I’m using Cyberduck (alternatively, use FileZilla or any tool that supports SFTP); use the EC2 DNS for the server, hadoop as the user and the related EC2 key pair to connect.

Now using your favorite SFTP tool connect to your Data Streams EC2 machine, using the ec2-user (allow your client to access port 22). If you don’t have the Data Streams free AMI yet follow this tutorial to provision one: Access Data Streams through the AWS Marketplace.

Navigate to /opt/data-streams/extras/etc/hadoop and copy the Hadoop config files you retrieved from the master node into it. NOTE: The etc/hadoop folders might not exist in /opt/data-streams/extras/, so you may need to create them.

Restart your Data Streams EC2 machine so that it will start to pick up the Hadoop config files.

The last step is to allow all traffic from Data Streams to your EMR cluster and vice versa. To do so, create security rules to allow all traffic inbound on both sides for Data Streams and EMR security groups ID.

Talend Data Streams: IoT Streaming pipeline

<<Download Talend Data Streams for AWS Now>>

Now it’s time to finalize our real-time anomaly detection pipeline that uses the Z-score. This pipeline is based on my previous article, so if you want to understand the math behind the scenes you should read that article.

All the infrastructure is in place and the required setup is done, so we can now start building some pipelines. Log on to your Data Streams Free AMI using the public IP and the instance ID.

Create your Data Sources and add Data Set

In this part, we will create two data sources:

  1. Our Kinesis Input Stream
  2. HDFS using our EMR cluster

From the landing page select Connection on the left-hand side menu and click on “ADD CONNECTION”.

Give a name to your connection, and for the Type select “Amazon Kinesis” in the drop-down box.

Now use an IAM user that has access to Kinesis with an access key. Fill in the connection fields with the access key and secret, click on "Check connection", then click on "Validate". Now, from the left-hand side menu, select Datasets and click on "ADD DATASET".

Give your dataset a name and select the Kinesis connection we’ve created before from the drop-down box. Select the region of your Kinesis stream then your Stream, CSV for the format and Semicolon for the delimiter. Once that is done, click on “View Sample” then “Validate”. 

Our input data source is set up and our samples are ready to be used in our pipeline. Let’s create our output data source connection: on the left-hand-side menu, select "CONNECTIONS", click on "ADD CONNECTION" and give a name to your connection. Then select "HDFS" for the type, use "Hadoop" as the user name and click on "Check Connection". If it says it has connected, then click on "Validate".

That should do it for now; we will create the dataset within the pipeline. But before going further, make sure that the Data Streams AMI has access to the EMR master and slave nodes (add an inbound network security rule on the EMR EC2 machines to allow all traffic from the Data Streams security group), or you will not be able to read from and write to the EMR cluster.

Build your Pipeline

From the left-hand side menu select Pipelines, click on Add Pipeline.

In the pipeline, on the canvas click Create source, select Kinesis Stream and click on Select Dataset.

Back on the pipeline canvas, you can see the sample data at the bottom. As you’ve noticed, incoming IoT messages are really raw at this point. Let’s convert the current value types (string) to numbers: click on the green + sign next to the Kinesis component and select the Type Converter processor.

Let’s convert all our fields to “Integer”. To do that, select the first field (.field0) and change the output type to Integer. To change the field type on the next fields, click on NEW ELEMENT. Once you have done this for all fields, click on SAVE. 

Next to the Type Converter processor on your canvas, click on the green + sign and add a Window processor; in order to calculate a Z-score, we need to define a processing window.

Now let’s set up our window. My ESP8266 sends sensor values every second, and I want to create a Fixed Time window that contains more or less 20 values, so I’ll choose Window duration = Window slide length = 20000 ms. Don’t forget to click Save.

Since I’m only interested in humidity, which I know is in field1, I’ll make things easier for myself later by converting the humidity row values in my window into a list of values (or array in Python) by aggregating on the field1 (humidity) data. To do this, add an Aggregate processor next to the Window processor. Within the Aggregate processor, choose .field1 as your Field and List as the Operation (since you will be aggregating field1 into a list).

The next step is to calculate Z-score for humidity values. In order to create a more advanced transformation, we need to use the Python processor, so next to the Aggregate processor add a Python Row processor.

Change the Map type from FLATMAP to MAP, click on the four arrows to open up the Python editor, paste the code below and click SAVE. In the Data Preview, you can see what we’ve calculated in the Python processor: the average humidity, the standard deviation, and the Z-score array and humidity values for the current window.

Even though the code below is simple and self-explanatory, let me sum up the different steps (a formula version follows the list):

  • Calculate the average humidity within the window
  • Find the number of sensor values within the window
  • Calculate the variance
  • Calculate the standard deviation
  • Calculate Z-Score
  • Output Humidity Average, Standard Deviation, Zscore and Humidity values.
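In formula form (my notation, not the article's), for the n humidity readings x_1, ..., x_n in the window:

\mu = \frac{1}{n} \sum_{i=1}^{n} x_i, \qquad \sigma = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)^2}, \qquad z_i = \frac{x_i - \mu}{\sigma}

The Python processor below implements exactly these steps (the lon100 variable is just a workaround for integer division).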
#Import standard Python libraries
import math

#'input' and 'output' are the record dictionaries provided by the Python Row processor

#Average function
def mean(numbers):
    return float(sum(numbers)) / max(len(numbers), 1)

#Initialize variables
std = 0

#Average humidity value for the window
avg = mean(input['humidity'])

#Length of the window
mylist = input['humidity']
lon = len(mylist)
# x100 in order to work around integer division
lon100 = 100.0 / lon

#Calculate the variance (sum of squared deviations from the average)
for i in range(len(mylist)):
    std = std + math.pow(mylist[i] - avg, 2)

#Calculate the standard deviation
stdev = math.sqrt(lon100 * std / 100)

#Copy the sensor values of the window (copied so the input list is not modified in place)
myZscore = list(input['humidity'])

#Calculate the Z-score for each sensor value within the window
for j in range(len(myZscore)):
    myZscore[j] = (myZscore[j] - avg) / stdev

#Output results
output['HumidityAvg'] = avg
output['stdev'] = stdev
output['Zscore'] = myZscore

If you open up the Z-Score array, you’ll see Z-score for each sensor value.

Next to the Python processor, add a Normalize processor to flatten the Python list into individual records: in the column to normalize, type Zscore, select the “Is list” option and then save.
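For illustration only (the field values are made up), this is roughly what normalizing does to one record: it emits one record per Z-score, carrying the scalar fields alongside each value:

# Illustration of the normalize step (not Data Streams code)
record = {'HumidityAvg': 56.0, 'stdev': 0.8, 'Zscore': [-1.25, 1.25, 0.0]}

normalized = [{'HumidityAvg': record['HumidityAvg'],
               'stdev': record['stdev'],
               'Zscore': z} for z in record['Zscore']]
print(normalized)   # three records, one per Z-score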

Let’s now recalculate the initial humidity value from the sensor. To do that, we will add another Python Row processor and write the code below:

#Output results
output['HumidityAvg'] = input['HumidityAvg']
output['stdev'] = input['stdev']
output['Zscore'] = input['Zscore']
#Rebuild the original humidity reading: humidity = Zscore * stdev + average
output['humidity'] = round(input['Zscore'] * input['stdev'] + input['HumidityAvg'])
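As a quick sanity check with made-up numbers (my own illustration, not part of the pipeline): if the window average is 56, the standard deviation 0.8 and the Z-score -1.25, the reconstruction gives the original reading back.

# Worked example of the reconstruction formula above
print(round(-1.25 * 0.8 + 56))   # 55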

 

Don’t forget to change the Map type to MAP and click save. Let’s go one step further and keep only the anomalies. If you had a look at my previous article, anomalies are Z-scores that fall outside the -2 standard deviation to +2 standard deviation range, which in our case works out to roughly -1.29 to +1.29. Now add a FilterRow processor. The product doesn’t yet allow filtering on a range of values, so we will filter on the absolute value of the Z-score being greater than 1.29; we test the absolute value because a Z-score can be negative.
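Here is a minimal sketch of the equivalent filter logic in plain Python (illustrative only; the 1.29 threshold comes from this article and would change with your own data):

# Keep only records whose absolute Z-score exceeds the threshold
THRESHOLD = 1.29

def is_anomaly(record):
    return abs(record['Zscore']) > THRESHOLD

records = [{'Zscore': 0.4, 'humidity': 56}, {'Zscore': -1.8, 'humidity': 52}]
print([r for r in records if is_anomaly(r)])   # only the second record is kept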

The last output shows 5 records flagged as anomalies out of the 50 sample records. Let’s now store those anomalies in HDFS: click on “Create a Sink” on the canvas and click on “Add Dataset”. Set it up as shown below and click on Validate.

You will end up with an error message; don’t worry, it’s just a warning that Data Streams cannot fetch a sample from a file that has not been created yet. We are now all set, so let’s run the pipeline by clicking on the play button at the top.

Let’s stop the pipeline and have a look at our cluster. Using Hue on EMR you can easily browse HDFS; go to user/Hadoop/anomalies.csv. Each partition file contains the records that are anomalies for its processing window.

There you go! We’ve built our anomaly detection pipeline with Talend Data Streams, reading sensor values from an SoC-based IoT device and using only cloud services. The beauty of Talend Data Streams is that we accomplished all of this without writing any code apart from the Z-score calculation; everything else was done in the web UI.

To sum up, we’ve read data from Kinesis, used the Type Converter, Aggregation and Window processors to transform our raw data, then used Python Row processors to calculate the standard deviation, average and Z-score for each individual humidity reading. We then filtered out normal values and stored the anomalies in HDFS on an EMR cluster.

That was my last article on Data Streams for the year. Stay tuned: I’ll write the next episodes when the product becomes generally available at the beginning of 2019. Again, happy streaming!

The post Creating Real-Time Anomaly Detection Pipelines with AWS and Talend Data Streams appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Announcing CiviCRM 5.5 Release

CiviCRM - Wed, 09/12/2018 - 07:58
This latest CiviCRM 5.5 release is now ready to download.  RELEASE NOTES: Big thanks to Andrew Hunt from AGH Strategies for putting together the release notes for this version.  The release notes for 5.5 can be accessed here.   SPECIAL THANKS:
Categories: CRM

Notice: Repository CDN URL Change

Liferay - Wed, 09/12/2018 - 07:53

Just a quick blog in case it hasn't come up before...

Liferay was using a CDN for offloading traffic for the repository artifacts. You've likely seen the URL during builds or within your build scripts/configuration.

The old one is of the form:

https://cdn.lfrs.sl/repository.liferay.com/nexus/content/groups/public/...

Recently though Liferay switched to a new CDN provider and are using a newer URL. You might have seen this if you have upgraded your blade or have started working with 7.1.

The new one is of the form:

https://repository-cdn.liferay.com/nexus/content/groups/public/...

If you're using the old one, I would urge you to change to the new version. I haven't heard if or when the old one will be retired, but you don't want to find out when your build server starts kicking out error emails at two in the morning.

  • If you are using Ant/Ivy, check the ivy-settings.xml for this URL and change it there.
  • If you are using Maven, check your poms and your master settings.xml in your home directory's hidden .m2 folder.
  • If you are using Gradle, check your settings.gradle file, your build.gradle files and possibly your gradle.properties file.

While the transition was in process, I and others recommended a number of times to just take out the cdn.lfrs.sl portion of the URL and go straight to repository.liferay.com, the non-CDN version. You should change those URLs to repository-cdn.liferay.com as well. Liferay is planning at some point to block public connections to the repository (it slows their internal build processes with so many users hitting the repository directly), although I have no idea if or when this will happen.

But again, you don't want to find out it happened when your build server starts failing at 2am...

David H Nebinger 2018-09-12T12:53:00Z
Categories: CMS, ECM

Devcon 2018

Liferay - Wed, 09/12/2018 - 03:25

Did you already book your ticket for Devcon 2018? Early Bird pricing ends in a few hours (14 Sep 2018), and I hear that the Unconference is solidly booked (not yet sold out, but on a good path to sell out very soon).

Whether or not you have been to a past Devcon, here are a few more reasons to come: the agenda is now online, together with a lot of information and quotes from past attendees on the main website. You'll have a chance to meet quite a number of Liferay enthusiasts, consultants, developers and advocates from Liferay's community. Rumor has it (substantiated in the agenda) that David Nebinger will share his experience on upgrading to Liferay 7.1, and that he can do so in 30 minutes. If you've ever read his blogs or forum posts, you know that he covers a lot of ground and loves to share his knowledge and experience. And I see numerous other topics, from the Developer Story to React, from OSGi to Commerce and AI.

Any member of Liferay's team that you see at Devcon is there for one reason: to talk to you. If you've ever had a question, wanted to lobby for putting something on the roadmap, or just wanted the reasoning behind a certain design decision, that's the place where you can find someone responsible, at any level of Liferay's architecture or product guidance. Just look at the names on the agenda, and expect a lot more Liferay staff to be on site in addition to those named. And, of course, a number of curious and helpful community members as well.

And if you still need yet another reason: Liferay Devcon consistently serves, by far, the best coffee you can get at any conference. The bar is high, and we're aiming to surpass it again with the help of Thomas Schweiger.

(If you're more interested in business than in this nerd stuff, we have events like LDSF in other places in Europe. If Europe is too far, consider NAS or another event close to you. But if this nerdy stuff is for you, you should really consider coming.)

(Did I mention that the Unconference will sell out? Don't come crying to me if you don't get a ticket because you were too late. You have been warned.)

 

(Images: article's title photo: CC-by-2.0 Jeremy Keith,  AMS silhouette from Devcon site)

Olaf Kock 2018-09-12T08:25:00Z
Categories: CMS, ECM

A welcome return to the Constellation Research iPaaS ShortList

SnapLogic - Tue, 09/11/2018 - 18:55

Constellation Research, an analyst firm that covers the IT space, has once again named SnapLogic to their Integration Platform as a Service (iPaaS) ShortList. Being a perennial member of this ShortList remains an honor and a reminder of how competitive the integration space has been and continues to be. Constellation Research founder, Ray Wang,[...] Read the full article here.

The post A welcome return to the Constellation Research iPaaS ShortList appeared first on SnapLogic.

Categories: ETL

Contact Layout Editor: A MIH Success Story

CiviCRM - Tue, 09/11/2018 - 15:39

It's here! Thanks to over $16,000 contributed by 24 individuals and organizations, we are pleased to announce the official release of the Contact Layout Editor. The success of this Make-It-Happen campaign is a tribute to the power of the CiviCRM user community coming together to create something that benefits everyone.

Categories: CRM

Key Considerations for Converting Legacy ETL to Modern ETL

Talend - Tue, 09/11/2018 - 14:25

Recently, there has been a surge in customers who want to move away from legacy data integration platforms and adopt Talend as their one-stop shop for all their integration needs. Some of these organizations have thousands of legacy ETL jobs to convert to Talend before they are fully operational. The big question on everyone’s mind is how to get past this hurdle.

Defining Your Conversion Strategy

To begin with, every organization undergoing such a change needs to focus on three key aspects:

  1. Will the source and/or target systems change? Is this just an ETL conversion from their legacy system to modern ETL like Talend?
  2. Is the goal to re-platform as well? Will the target system change?
  3. Will the new platform reside on the cloud or continue to be on-premise?

This is where Talend’s Strategic Services can help carve a successful conversion strategy and implementation roadmap for our customers. In the first part of this three-blog series, I will focus on the first aspect of conversion.

Before we dig in, it’s worth noting a very important point: the architecture of the product itself. Talend is a Java code generator; unlike its competitors (where the code itself is migrated from one environment to the other), Talend builds the code and migrates the built binaries from one environment to the other. In many organizations, it takes a few sprints to fully internalize this fact, as architects and developers are used to the old way of thinking about code migration.

The upside of this architecture is that it enables a continuous integration environment that was not possible with legacy tools. A complete Talend platform architecture includes not only the product itself but also third-party products such as Jenkins, an artifact repository like Nexus, and a source control repository like Git. Compare this to a Java programming environment and you can clearly see the similarities. In short, it is extremely important to understand that Talend works differently, and that’s what sets it apart from the rest of the crowd.

Where Should You Get Started?

Let’s focus on the first aspect, conversion. Assuming that nothing changes except the ETL jobs that integrate, cleanse, transform and load the data, this is an attractive opportunity to leverage a conversion tool: something that ingests legacy code and generates Talend code. It is not a good idea to try to replicate the entire business logic of all ETL jobs manually, as there is a great risk of introducing errors, leading to prolonged QA cycles. However, it is also not a good idea to rely completely on the automated conversion process, since the comparison may not always be apples to apples. The right approach is to use the automated conversion process as an accelerator, with some manual intervention.

Bright minds bring in success. Keeping that mantra in mind, first build your team:

  • Core Team – Identify architects, senior developers and SMEs (data analysts, business analysts, people who live and breathe data in your organization)
  • Talend Experts – Bring in experts on the tool so that they can guide you and provide best practices and solutions for your conversion effort; they will also participate in performance tuning activities
  • Conversion Team – A team that leverages a conversion tool to automate the conversion process. A solid team with a solid tool and open to enhancing the tool along the way to automate new designs and specifications
  • QA Team – Seasoned QA professionals that help you breeze through your QA testing activities

Now comes the approach – Follow this approach for each sprint:

Categorize 

Analyze the ETL jobs and categorize them by complexity, based on the functionality and components used. Some good conversion tools provide analyzers that can help you determine the complexity of the jobs to be converted. Spread a healthy mix of jobs of varying complexity across each sprint.

Convert 

Leverage a conversion tool to automate the conversion of the jobs. Certain functionalities, such as an “unconnected lookup”, can be achieved through an alternative approach in Talend, and seasoned conversion tools will help automate such functionalities.

Optimize

Focus on job design and performance tuning. This is your chance to revisit the design, if required, either to leverage better components or to go for a complete redesign. Also focus on performance optimization: for high-volume jobs, you could increase throughput and performance by leveraging Talend’s big data components. It is not uncommon to see a converted Talend Data Integration job completely redesigned as a Talend Big Data job to drastically improve performance; a further advantage is that you can seamlessly execute standard data integration jobs alongside big data jobs.

Complete 

Unit test and ensure all functionality and performance acceptance criteria are satisfied before handing the job over to QA.

QA 

An automated QA approach to compare the result sets produced by the old and new ETL jobs. At a minimum, focus on the items below (a minimal comparison sketch follows the list):

  • Compare row counts from the old process to that of the new one
  • Compare each data element loaded by the old process to that of the new one
  • Verify “upsert” and “delete” logic work as expected
  • Introduce an element of regression testing to ensure fixes are not breaking other functionalities
  • Performance testing to ensure SLAs are met
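As a minimal sketch of the first two checks (the connection and table names below are hypothetical; substitute your own targets and database driver):

# Minimal sketch of an automated comparison between old and new target tables
import sqlite3   # stand-in for whatever database driver your targets use

def row_count(conn, table):
    return conn.execute("SELECT COUNT(*) FROM " + table).fetchone()[0]

old_conn = sqlite3.connect("old_target.db")   # hypothetical old target
new_conn = sqlite3.connect("new_target.db")   # hypothetical new target

for table in ["customers", "orders"]:         # hypothetical table names
    assert row_count(old_conn, table) == row_count(new_conn, table), table + ": row counts differ"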

Now, for several reasons, there can be instances where you need to design a completely new ETL process for certain functionality in order to continue processing data in the same way as before. For such situations, leverage the “Talend Experts” team, which not only liaises with the team doing the automated conversion but also works closely with the core team to ensure that the best solution is proposed; that solution is then turned into a template and provided to the conversion team, who can automate the new design into the affected jobs.

As you can see, these activities can be part of the “Categorize” and “Convert” phases of the approach.

Finally, I would suggest chunking the conversion effort into logical waves. Do not go for a big-bang approach, since the conversion effort could be a lengthy one depending on the number of legacy ETL jobs in the organization.

Conclusion:

This brings me to the end of the first part of the three-blog series. Below are the five key takeaways of this blog:

  1. Define scope and spread the conversion effort across multiple waves
  2. Identify core team, Talend experts, a solid conversion team leveraging a solid conversion tool and seasoned QA professionals
  3. Follow an iterative approach for the conversion effort
  4. Explore Talend’s big data capabilities to enhance performance
  5. Innovate new functionalities, create templates and automate the conversion of these functionalities

Stay tuned for the next two!!

The post Key Considerations for Converting Legacy ETL to Modern ETL appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Building charts for multiple products

Liferay - Tue, 09/11/2018 - 03:28

With Liferay releasing new products such as Analytics Cloud and Commerce we decided to cover the need for charts by providing an open source library.

 

The technology

Clay, our main implementation of Lexicon, created Clay Charts. These charts are built on top of Billboard.js, to which Julien Castelain and other Liferay developers have made many contributions. Billboard.js is an adaptation layer on top of D3.js, probably the best-known and most widely used data visualization library these days.

 

The issue

Although Billboard.js is a very good framework, it did not cover all our needs in terms of interaction and visual design. Therefore, we have been working on top of it, contributing some of the work back and keeping the rest inside Lexicon and Clay.

Improving accessibility

Improving accessibility across the different charts was one of our first contributions to Billboard.js. We provided 3 different properties that help differentiate the data without having to rely on color.

 

  • 9 different dashed stroke styles for line charts that help readers follow the shape of each line.

  • 9 different shapes to use as dots inside line charts and the legend that help readers identify the points on each line.

  • 9 different background patterns to be used on shaped charts like the doughnut chart or the bar chart; adding this property to a chart background helps to recognise the different shapes even if the colors are similar.

Here is a clear example of how a user can perceive, read and follow the different data series in the line chart without the direct use of color.

Creating a color palette

Color is one of the first properties that users perceive, along with shapes and lines, making it our next priority. We needed a flexible color palette that allowed us to represent different types of data.

This set is composed of 9 different, ordered colors that are meant to be used in shaped charts as backgrounds or in line charts as borders.

Each of these colors can be divided into 9 different shades using a Sass generator, which is useful for generating gradient charts and covering all the standard chart situations.
Here’s an example using the color blue:

Ideally, use the charts over a white background to take full advantage of these colors.

Warning: using these colors for text won't meet the minimum contrast ratio required by the W3C. Using a legend, tooltips and popovers to provide textual information is the best course of action.

Introducing a base interaction layer

The idea behind the design of these interactions is to provide a consistent and simple base for all charts. This increases predictability and reduces the learning curve.

These interactions are based on the main events (click/hover/tap) applied to the common elements in our charts: axis, legend, isolated elements or grouped elements.
We also reinforce the visualization with highlights linking related information, and with extended information displayed through popovers.

As you can see in the example below, the stacked bar chart and the heatmap share the same mouse-hover interaction to highlight the selected data. This is done without changing the main color of the focused element; instead, the opacity of the other elements is decreased.

In addition to this, each designer can extend these interactions depending on their product, as well as work on advanced interactions. So, if they need specific behaviors such as a different result on hover, data filters or data settings, they can add them to the chart as a customization.

Conclusion

Working with D3.js allowed us to focus on our project details such as accessibility, colors and interaction, adding quality to the first version of Charts and meeting the deadline at the same time.

Thanks to the collaboration with Billboard.js we were able to help another open source project and as a result, share our work with the world.

You can find more information about Clay, Charts and other components directly inside the  Lexicon Site.

Emiliano Cicero 2018-09-11T08:28:00Z
Categories: CMS, ECM