AdoptOS

Assistance with Open Source adoption

Open Source News

Kevin Thull, from behind the camera

Drupal - Tue, 04/24/2018 - 08:17

Chances are, if you've attended any of the Drupal camps in North America, you've run into Kevin Thull. He's the fellow dashing from room to room before the first session begins, setting up the AV equipment and checking in with presenters to make sure they all "push the red button". Because of him, we are all able to attend the sessions we miss while busy elsewhere. He is personally responsible for recording over 800 sessions and donating countless hours of his time.

Not only does he record sessions at camps, he also helps organize Midwest Drupal Camp. This coming year he has been tapped as its fearless leader: he will be working on the web team, arranging catering, organizing the venue, and handling all the audiovisual work.

This year at DrupalCon Nashville the Drupal Community awarded Kevin the Aaron Winborn award. The Aaron Winborn award is presented annually to an individual who demonstrates personal integrity, kindness, and above-and-beyond commitment to the Drupal community. Kevin's commitment to capturing knowledge to share with the whole community is truly inspirational. He has provided a platform that helps tie local Drupal Communities together.

The Drupal Community Spotlight Committee's AmyJune sat with Kevin before Nashville and asked him some questions about contributing to the Drupal Community.

Coincidentally, AmyJune had chosen to write this spotlight on Kevin a few weeks before DrupalCon. When she asked him if he was coming to Nashville, he relayed that he had a prior commitment to attend another conference for his job. Unbeknownst to us, Kevin knew during the interview that he had been awarded the honor and managed to keep it a secret. While he did mention that the marketing conference only ran through Wednesday, AmyJune was pleasantly surprised to see him take the stage.

Well, not too surprised, after all he truly deserves the honor.

How long have you been involved in the Drupal community?

I’m not involved with Drupal through my employer (I work in marketing); I got into Drupal through freelance work.

My first meet up was when the Using Drupal 6 book first came out. I would say that is when I first started getting involved in the community. So, that's close to 10 years now.

I started recording Drupal camps back in 2013. The official Chicago camp was having issues, so we as a far-western suburban group decided to have our own camp. I thought I could do some of the logistics and session recordings, since that's what I do for work. I had the same setup with video cameras in the back of the room, and I spent countless hours rebuilding these presentations. It's a similar process, but it's a very different presentation between a marketer and someone from the Drupal community giving a talk on diversity. A marketer might have 20 slides, but a Drupal talk may have 104.

Everybody at the time was telling me I was insane for doing this, but my response was, "Nope, it's important."

2014 was the first MIDCamp, and we were able to get the DA recording kits. But that was not great either: there was a lot of setup, they were expensive to ship back and forth, and they didn't work terribly well. That's when Avi Schwab ( https://www.drupal.org/u/froboy ) and I started collaborating. He did all the setup for the laptops and I did all the running around from room to room and the post-production. We brainstormed and I started doing research. The next suburban camp is when I had my first test kit for what I am using today.

I saw that you recorded Pacific Northwest Drupal Summit remotely this year. Can you share that experience with us?

That's a funny story. It was the same weekend as Jersey Camp, and I tend to favor camps I have already recorded. They had committed before Pacific Northwest Drupal Summit, and when Amber Matz saw me at BADCamp, I explained the conflict. I told her I had started working on the next step and would be shipping the kits to camps. I sat with her and showed her how the kit worked; she said it didn't seem too difficult, and we said, "Let's do this."

I got a new case and sent five kits to them. It's funny how talking with camp organizers helps all of this come together. Later, at New England Camp, I was explaining to one of their organizers how I was shipping kits, and he suggested labeling the cables. I thought that was brilliant, so I got a label maker and labeled all the cables. I also wrote out a more detailed instruction guide; all of these were things I had been meaning to do.

I sent five kits, insured, via FedEx for around $50, whereas the DA sends this giant Pelican case that must cost hundreds of dollars. That was part of the plan originally; we wanted something lightweight and easy to use. I heard they had an 84% capture rate, which is a great start. The issue is that non-Mac recordings have no sound, so I have to lay the backup recording into the video. A lot of times that backup recorder gets turned off or stopped for some reason.

While I was in Florida I started working on pinpointing why non-Mac machines don't have audio. Later, I had mixed success at MIDCamp: I captured a couple, but some didn't work, one being an Ubuntu build. At lunch I worked with that presenter to test various setups, and we found one that worked. Once I crack that nut, shipping the kits with even more instructions will increase the capture rates.

Now that you're capturing some camps remotely, how does that cut into how much you like to travel?

I do like to travel, but there are a couple of issues. A) I can't be everywhere. B) I am potentially doing 13 or 14 camps this year, which is cool now, but it may not be cool in a couple of years. And C) I don't do any Drupal at work, and when I first started doing this I was using all my PTO. But I brought back all kinds of information, and my boss recognized that. She said I could count those as remote days, though of course there's a limit.

There is a balance to be found between visiting the camps and sending the kits remotely.

What are some of your favorite camps?

Everybody asks me that; that question is not fair. I like them all. It's generally the places where I know the most people and/or I go ahead of time to play before camp starts. I am not a solo traveller, so if I know a lot of people at a camp I tend to like it: BADCamp, Twin Cities, St. Louis, Texas (because of Austin), and Montreal.

What are the things you like to do before a camp that makes it more fun?

HaHaHa, eat and drink all the things. Bar Crawls, Food Crawls, you name it.

Have you given any thought to helping with camps outside the States?

I would like to, but it’s a time and cost issue. The camps now reimburse my travel expenses. To fly to a European camp - I don’t know if that would be in their budget.

It’s interesting: Mauricio Dinarte tailed me for a few camps, and he wanted to, and did, get some kits to start recording in Nicaragua. One day he tweeted that he saw my kits at Drupal Camp Antwerp. It’s cool to see how these things grow organically. There’s not a camp that goes by where someone from the community doesn’t ask me how everything works.

Congratulations Kevin!

Kevin’s not just the guy who reminds us all to push the red button. He is the guy who loans out his phone when a presenter is doing a live demo and needs an internet hotspot. He is the guy spending hours during and after Drupal camps piecing together audio and video for maximum quality. The Drupal community has so much to thank him for; the Aaron Winborn award couldn’t have been given to anyone more deserving.

Link to Kevin Thull's YouTube acceptance

On Kevin, from the community:

“It has become a no-brainer to invite Kevin to Florida DrupalCamp and have him record and post all of our sessions online. He makes it easy for us to share our great content with a world-wide audience by coming prepared, making it easy for presenters, and uploading the video almost immediately. He’s a true asset to the community.”  - Mike Anello (Florida Camp)

"His never-ending abundance of energy and positive contributions in the form of Drupal Camp video services in the US is unmatched. At the camps where I’ve spoken or helped organize he has been a great person to work with through the whole process - helpful and organized across the board." - Aimee Degnan Hannaford (BADCamp)

“We appreciated Kevin’s willingness to send recording equipment and documentation to our event so that we could record sessions, even though he couldn’t be there. He was encouraging and helpful all along the way.” - Amber Matz (PNWDS Portland)

Thank you, Kevin, for your contribution to the community, for sharing your story with us, and for being a most excellent secret keeper! And thank you to the hundreds of volunteers who make Drupal camps, cons, meetups and picnics a success every year. And thank you, AmyJune, for this most excellent Drupal Community Spotlight article!

Top image credit: Image by Jordana F

Categories: CMS

3 Tactics That May Combat Card-Not-Present Fraud

PrestaShop - Tue, 04/24/2018 - 06:59
A person opens a website or mobile app and navigates to its product listings.
Categories: E-commerce

Achieving “mobile by default” in the public sector

Liferay - Tue, 04/24/2018 - 05:53
In 2000, approximately 27% of the UK population was using the Internet; by 2010 that figure had risen to 85%. Something fundamentally changed: PC sales plummeted as smartphones such as the BlackBerry and iPhone (launched 2007) began offering consumers access to apps, email and the web wherever and whenever they wanted. Today, affordable mobile data and smartphones have made mobile internet the first choice for the 98% of individuals who use the Internet.

Source: Statista

For public sector organisations, delivering Digital By Default means providing accessible and high-quality digital services to citizens of all ages through the digital touchpoints they use in the course of their daily lives. 91% of UK citizens use their smartphone every day (Deloitte), but with up to 48,687 hardware models and 907,076 combinations of browser, operating system and hardware (51 Degrees), it's a complex and ever-changing target for the DWP or GDS teams to address, let alone a smaller departmental web team.

Source: ONS 2017

Delivering an attractive mobile experience efficiently is arguably the biggest priority, but also the most significant challenge. There is good reasoning behind the GDS advice "Don't build apps": the cost and complexity of building an iOS app in isolation, only to repeat the exercise for Android and maintain both alongside responsive websites and portals, is prohibitive without an omnichannel approach. So how can effective mobile experiences be delivered?

Planning a mobile strategy

In 2018, the mobile landscape has evolved multiple times since the consumer shift to mobile began. Today there are many effective ways to address the need for rapid development and more sustainable mobile services, giving organisations the flexibility to optimise cost and user experience for each specific project.

1. Mobile responsive web pages

Legacy portal and Content Management System (CMS) software may deliver a poor web experience, with a front end built for desktop browsers or a generic mobile-optimised view. Modern websites and templates often employ a responsive HTML5 template and rely on front-end frameworks such as Bootstrap to optimise the way content is presented when the page loads on a device, based on its screen size and orientation.

This can also be achieved with a mobile adaptive approach, where the server and browser detect the user's device and the web server sends only the most appropriate layouts and assets based on rules. This approach is particularly effective for delivering an optimal experience to users with limited bandwidth (rural areas or mobile) or lower-performance devices, because large images or JavaScript used to enhance the experience on high-end devices can be eliminated. Adaptive is useful for retrofitting an existing site to make it more mobile-friendly.

Regardless of the implementation, the user experience of this approach is limited by the lack of access to device capabilities or other apps, and varies with each browser and version. It's ideal for publishing content and services that don't require interaction or authentication.

2. Hybrid apps

Hybrid apps use web technology to deliver information to users. Content is developed using standard languages such as HTML, JavaScript and CSS, so it's easy for web developers to build. Content managed centrally in a web platform translates to lower costs and a potentially faster development cycle.

Here comes the clever bit: those web pages are then loaded within a native app that the user can download and install, and the hybrid app framework provides access to native device capabilities and UX features through an "app shell".

The developer can improve user satisfaction by tailoring the app to each device's native capabilities, but the approach doesn't guarantee the best performance or user experience. Hybrid mobile apps are an ideal choice for delivering good-quality services cost-effectively when the user interface is not critical to success, for "disposable apps" such as fundraising and event apps, or as a very capable interim mobile solution while evaluating the decision to "go native".

This approach is recognised by Google as a Progressive Web App, and meeting certain mobile-focused criteria, such as push notifications and adding icons to the home screen, will also help validate your web app as a mobile site in the search engine so users can find it easily.

3. Native apps

Native apps downloaded from app stores are developed specifically for a platform (such as iOS or Android) using different programming languages. This introduces a requirement for new specialist skills and experience, or for outsourcing, which may be costly and have implications for the future sustainability of the IT strategy.

Native apps can also be costly to maintain because they utilise separate back-end services to provide identity management, which must be connected with the core systems of record to deliver services of real value such as payments, electronic registrations or paperless forms.

So what are the advantages of going native? Developing a native app makes sense for challenges that involve intensive use of native device capabilities. For citizens, this could include remote patient monitoring or transport information (for staff, the digital transformation possibilities are broad, ranging from field operators in maintenance roles to social services, social care and emergency services).

Laying new foundations

New technology is starting to level the playing field. Digital Experience Platforms (DXPs) connect the back-end systems required for self-service to deliver omnichannel digital services from a common platform. This approach significantly reduces the complexity of development and ongoing maintenance for omnichannel digital services.

Native apps will deliver the best mobile user experience, and if that's worth investing in for your organisation, then a native app is an option you will want to explore. Organisations can rapidly develop a native iOS or Android mobile app from a single codebase using a cross-platform framework such as Xamarin, which connects to a DXP to access all the same back-end systems made available through the website or a kiosk.

Times have changed and technology has matured. With the right long-term strategy, providing high-quality mobile services can be affordable and sustainable. If you would like more tips, tools and advice to help you weigh up the pros and cons of mobile strategies, Liferay has compiled a useful guide that you can download below.

Explore strategies for sustainable mobile services

Delivering mobile experiences that are effective for citizens but sustainable enough to meet future needs is challenging. Learn how to select the right strategy and quickly drive value with our guide.

Read our Mobile Strategy guide   Robin Wolstenholme 2018-04-24T10:53:05Z
Categories: CMS, ECM

KPI Module for CiviCRM

CiviCRM - Tue, 04/24/2018 - 03:43

Categories: CRM

How To Use Visuals Through A Buyer’s Journey

PrestaShop - Mon, 04/23/2018 - 09:31
How To Use Visuals Through A Buyer’s Journey
Categories: E-commerce

Intranet Software

Liferay - Fri, 04/20/2018 - 17:07

There are dozens of intranet software options available to businesses, and choosing the right path can be a challenge, especially for those launching an intranet for the first time. Any selection process should start with a thorough understanding of what your employees need. Are they looking for a better way to collaborate online? Are they tired of logging into three different sites to manage their work schedule? Assessing all of these questions will help you build a list of priorities to guide your decision.

However, even with your requirements in hand, you still face a competitive market of intranet solutions. Most software will fall into one of the following three categories, and narrowing your options to one of them will streamline the decision process.

1. The Toolkit

If you do not yet have an intranet, your team is probably already working with its own preferred set of online tools and applications. This might include storing meeting notes in Evernote, scheduling meetings with Doodle, sending large files with WeTransfer or using other productivity apps.

For companies that do not need an intranet solution, encouraging employees to use a standardized set of tools is a great alternative. This approach only requires you to choose the tools employees should use and ensure everyone has the access they need.

Today, many online work tools can be integrated, and assembling a toolkit that meets your team's needs is easier than managing an intranet, as long as your team does not use more than a dozen tools or have conflicting requirements. But what if things are a little more complicated than that?

2. The Out-of-the-Box Solution

The out-of-the-box intranet is ideal for companies with general needs. That is, their requirements are too complex for an integrated toolkit to meet, but not so complicated that they need custom development. In this scenario, an intranet solution that ships with standard, ready-to-use features is a great choice.

Some of the features available in an out-of-the-box intranet solution include:

  • Personalized homepages and news feeds
  • Video player
  • Document libraries
  • Activity streams
  • Group pages
  • Employee profiles and blogs
  • Mobile support

Out-of-the-box solutions are useful for companies with limited time, budget or development resources. Launching an intranet is one thing, but maintaining it is another. A ready-made solution will require fewer resources to keep up to date, although you run the risk of becoming more dependent on the vendor and on any future decisions it makes about different features.

In addition, an out-of-the-box solution will already have made many decisions for you, requiring you to accept certain aspects as they are, without any opportunity to customize. That can be a very good thing, especially if your company is new to user experience (UX) design. Imagine building your own internal version of Facebook from scratch. Are you ready to make every UX decision involved in designing a social platform? Using ready-made tools in a social intranet will ensure you deploy a site with a solid user experience foundation without much investment. From there, you can always keep adjusting it to your needs.

3. The Custom Platform

The final intranet category is the custom platform. This is a web platform solution for large companies with complex needs. Often there are industry-specific demands, such as a retail bank that needs to incorporate the specific finance tools its employees use daily. Custom platforms are always a commitment, but there are many things they do well that out-of-the-box solutions cannot accomplish.

For example, intranets built on custom platforms are better equipped to handle complex business processes. A diagnostics laboratory built an integrated customer portal and intranet around the idea of improving the inconvenient process of getting a medical exam. To let employees track these tests, the intranet required integrations that had never been done before, such as syncing databases with the USPS, identifying patient information through its diagnostics lab, and making it easy for doctors to log in and access exam results. Since they are one of the few (perhaps the only) diagnostics laboratories to build a digital solution for this testing process, there is no out-of-the-box intranet solution that meets their needs.

Custom platforms also have more flexibility than other intranet options. Your intranet should be an extension of your company culture, and a customizable platform lets you put internal culture initiatives front and center as the first thing employees see on the homepage. Say your HR team wants to launch a fitness challenge in which employees set daily goals and walk as a team. That might involve adding encouraging content to a discussion forum just for that challenge, creating forums and scheduling tools for those who want to meet up, building a mobile app with a leaderboard so people can check each other's progress, and so on. The possibilities are endless for companies with the resources to invest in great employee experiences.

Increased flexibility is also critical for organizations whose requirements change constantly and need to be accommodated quickly. QAD, an ERP software vendor for manufacturing companies, needed to fold new business requirements into every project it launches on its intranet. Because the IT team invested in a custom intranet, it can quickly accommodate these new requirements as they arise and provide employees with a unique working tool.

Finally, if your company is planning to bring new technologies such as AI or IoT into its intranet, a custom platform lets you plan what that inclusion will look like. Integrating with an out-of-the-box solution will depend on when the vendor chooses to enable those technologies. Building on a customizable intranet puts far more power in your company's hands.

Start with What You Need

Whether you are building an intranet for the first time or upgrading a long-standing site, it is best to start with what you know you need. Often the real challenge of intranets is convincing employees to actually use them. By focusing on the apps and tools you know they use daily, you can choose an intranet that addresses their main pain points, ensuring better employee engagement across the board.

  Isabella Rocha 2018-04-20T22:07:53Z
Categories: CMS, ECM

Diagnosing and Correcting Your Common SEO Problems

Liferay - Fri, 04/20/2018 - 10:46

Developing a well-structured and great-looking website can help a company provide its target audience with a great online experience. However, without a strong search engine optimization (SEO) strategy, a great website won’t be readily found by potential customers. If you are either in the process of creating a website or have already created one, it is important that you understand what potential problems within your technology strategy may be impacting your site.

While search engine optimization is not always necessary when creating a portal, it still has a large impact on B2B marketing, B2B ecommerce, and websites for healthcare, education, government and other types of organizations. SEO can have an especially large impact on B2C marketing sites, B2C ecommerce and non-profit websites, as these rely on target audiences finding them via search engines. Without proper optimization, well-crafted websites can go unfound and be outranked by competitors.

The following SEO issues may affect your site, but most of these problems can be corrected when equipped with the right information. Take the time to see if your site is running into these complications and how their related solutions can be applied to your online presence.

Common Problems

Frequent SEO issues that can be encountered on a website can be plotted on two axes: Warning - Severe and Technical - Content. When mapping out your SEO problems, consider where they fall on this grid in order to better understand when and how you should take action.

The following four types of problems may be affecting your sites and vary in both importance and effects. As such, consider whether your online presence is currently experiencing these issues when developing a plan to improve your SEO strategy.

Technical Warnings

The following issues are caused by technical missteps, but have a less immediate negative impact on a website’s SEO. However, these can harm a site over the long term.

Problem: 302 Redirects Instead of 301 - Programmers need to decide whether 301 redirects, which indicate a permanent move, or 302 redirects, which are often used for temporary purposes, are right for what is happening with their sites. A 301 redirect is right when a page is being permanently replaced, redirecting all traffic to the new landing page, while a 302 may be more appropriate when a site's content is being updated and edited. Choosing the right one will keep your page's search rankings intact despite the changes being made.

  • Solution: Determine the type of move that has happened with your site content and redirect as necessary. Try to prevent switching redirect type by thinking long term.
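To make the distinction concrete, here is a minimal Python sketch (the paths and the `REDIRECTS` table are hypothetical) that picks 301 for permanent moves and 302 for temporary ones:

```python
from http import HTTPStatus

# Hypothetical redirect table: old path -> (new path, is_permanent).
# A permanently replaced page gets 301; a temporary move gets 302.
REDIRECTS = {
    "/old-landing": ("/new-landing", True),
    "/spring-sale": ("/current-promotions", False),
}

def redirect_for(path):
    """Return (status_code, location) for a path, or None if no redirect applies."""
    entry = REDIRECTS.get(path)
    if entry is None:
        return None
    new_path, permanent = entry
    status = HTTPStatus.MOVED_PERMANENTLY if permanent else HTTPStatus.FOUND
    return (int(status), new_path)

print(redirect_for("/old-landing"))  # (301, '/new-landing')
print(redirect_for("/spring-sale"))  # (302, '/current-promotions')
```

Keeping the permanence decision in one table like this makes it easier to "think long term" and avoid flip-flopping between redirect types later.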

Problem: Overly Dynamic URLs - Variables and parameters used to produce URLs may create countless iterations of URLs for the same page, leading to duplicate content and a loss of link value for landing pages.

  • Solution: Fix issues with redirects that return 200 instead of 302, create good-looking URLs when possible, and always tell search engines what your parameters do.
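One way to tame parameter-driven duplication is to normalize URLs before publishing or linking them. A small Python sketch, assuming a hypothetical list of tracking parameters that never change the page content:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Parameters assumed to affect tracking only, not content (adjust per site).
IGNORED_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "sessionid"}

def normalize_url(url):
    """Collapse URL variants: drop tracking params, sort the rest."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k not in IGNORED_PARAMS]
    query.sort()  # stable order so equivalent URLs compare equal
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(query), ""))

a = normalize_url("https://example.com/shoes?color=red&utm_source=ad&size=9")
b = normalize_url("https://example.com/shoes?size=9&color=red&sessionid=42")
print(a == b)  # True: both variants collapse to the same URL
```

The same normalization rules you encode here are what you would declare to search engines via parameter handling settings.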

Problem: Missing Canonical Tags - If your website generates the same or similar content on multiple URLs to create dynamic pages, search engines may become confused during crawling, leading to poor search results for your site content.

  • Solution: Canonical tags can group these multiple URLs together and assign a master version, which will be crawled by engines to rank appropriately for target keywords, leading visitors to dynamic content when appropriate after landing on the page.
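A rough illustration of the idea in Python: group variant URLs under a master version and emit the canonical tag each page should carry (the `CANONICAL` mapping is invented for the example):

```python
from html import escape

# Hypothetical mapping of URL variants to their master (canonical) URL.
CANONICAL = {
    "https://example.com/shoes?sort=price": "https://example.com/shoes",
    "https://example.com/shoes?page=2": "https://example.com/shoes",
}

def canonical_link_tag(url):
    """Emit the <link rel="canonical"> tag the page at `url` should carry."""
    master = CANONICAL.get(url, url)  # a page with no variants is its own master
    return '<link rel="canonical" href="%s">' % escape(master, quote=True)

print(canonical_link_tag("https://example.com/shoes?sort=price"))
# <link rel="canonical" href="https://example.com/shoes">
```

Every variant then points crawlers at one master URL, which is the one that accumulates ranking signal.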
Content Warnings

These issues also have a less immediate and severe impact on a website’s SEO, but result from how content is created and implemented on a website.

Problem: Bad Search Presentation - If your content displays poorly in search results, it can prevent searchers from clicking on your link, despite it ranking for the correct terms. Issues that lead to poor presentation include titles or title tags that are too long, programmatically created titles, and missing meta descriptions.

  • Solution: Create a page title attribute that content creators can control and consider giving control over open graph tags as well.
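A simple pre-publish check along these lines can be scripted; the 60- and 160-character thresholds below are common rules of thumb, not official limits:

```python
def presentation_warnings(title, meta_description):
    """Flag issues that make a search result display poorly."""
    warnings = []
    if not title:
        warnings.append("missing title")
    elif len(title) > 60:
        warnings.append("title may be truncated in results")
    if not meta_description:
        warnings.append("missing meta description")
    elif len(meta_description) > 160:
        warnings.append("description may be truncated in results")
    return warnings

print(presentation_warnings("A" * 70, ""))
# ['title may be truncated in results', 'missing meta description']
```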

Problem: Too Many Title or Meta Description Tags - It may be tempting to apply as many title and meta description tags as possible in order to give your content a wide scope, but overtagging will not help your site’s SEO.

  • Solution: Keep the number of tags limited and stay focused on the specific keywords of your content. Descriptions should give visitors an accurate overview of your page’s content without being longer than 160 characters.
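If descriptions are generated from page content, they can be trimmed to the 160-character guideline automatically. A sketch (breaking at a word boundary and marking the cut with an ellipsis is a stylistic choice):

```python
def trim_description(text, limit=160):
    """Trim a meta description to `limit` chars at a word boundary."""
    if len(text) <= limit:
        return text
    cut = text.rfind(" ", 0, limit - 1)
    if cut == -1:  # no space found: hard cut
        cut = limit - 1
    return text[:cut].rstrip() + "\u2026"

short = trim_description("Compact overview of the page.")
long_ = trim_description("word " * 60)
print(len(long_) <= 160)  # True
```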

Problem: URL Too Long - Either due to being automatically generated or manually created, overly long URLs can prevent your content from properly targeting and ranking for your designated keywords.

  • Solution: Create standards for URLs, including limiting the number of subdomains used, excluding dynamic parameters when possible, keeping it readable by human beings and trying to stay under 100 characters if possible.
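Such standards can be enforced with a small linter. A Python sketch with assumed limits that you would adapt to your own rules:

```python
from urllib.parse import urlsplit

def url_warnings(url, max_length=100, max_subdomains=1):
    """Check a URL against assumed house standards: length, subdomain
    count, and presence of dynamic parameters."""
    warnings = []
    if len(url) > max_length:
        warnings.append("longer than %d characters" % max_length)
    parts = urlsplit(url)
    labels = parts.netloc.split(".")
    # e.g. shop.example.com has one subdomain label before example.com
    if len(labels) - 2 > max_subdomains:
        warnings.append("too many subdomains")
    if parts.query:
        warnings.append("carries dynamic parameters")
    return warnings

print(url_warnings("https://a.b.example.com/p?id=7"))
# ['too many subdomains', 'carries dynamic parameters']
```

Running a check like this over a sitemap before launch surfaces offending URLs while they are still cheap to rename.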

Problem: Using Meta Keywords - Meta keywords should not be part of a modern SEO strategy. However, many companies still use them in their pages, believing that it will be the key to ranking well, rather than effectively optimizing the page content itself.

  • Solution: Let go of using meta keywords in your SEO strategy and instead create a strategy based around focus keywords and content optimization in order to rank for terms relevant to you and your target audience.
Severe Technical Issues

There are several severe website technical issues that may find their root in initial programming or various updates. In either case, these problems can cause severe interference with a site’s SEO and should be addressed as soon as possible.

Problem: 4xx and 5xx Errors - These errors occur when either the client has caused an error on a site (4xx) or a server has failed to fulfill a request (5xx). While it may be impossible to completely prevent these, a high volume will affect both SEO performance and user experience.

  • Solution: Identify your top offenders and decide whether they are valid. In addition, identify the source of the issue, use 301 redirects to relevant content, fix bad references and leave the rest as 404s.
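Finding the top offenders is usually a matter of scanning access logs. A minimal Python sketch against hypothetical Common Log Format entries:

```python
import re
from collections import Counter

# Hypothetical access-log sample in Common Log Format.
LOG = """\
10.0.0.1 - - [24/Apr/2018:08:17:00] "GET /about HTTP/1.1" 200 512
10.0.0.2 - - [24/Apr/2018:08:17:02] "GET /old-page HTTP/1.1" 404 180
10.0.0.3 - - [24/Apr/2018:08:17:05] "GET /report HTTP/1.1" 500 90
10.0.0.2 - - [24/Apr/2018:08:17:09] "GET /old-page HTTP/1.1" 404 180
"""

def top_offenders(log_text):
    """Count 4xx/5xx responses per (status, path) so the worst URLs surface first."""
    counts = Counter()
    for line in log_text.splitlines():
        m = re.search(r'"[A-Z]+ (\S+) [^"]*" (\d{3})', line)
        if m and m.group(2)[0] in "45":
            counts[(m.group(2), m.group(1))] += 1
    return counts.most_common()

print(top_offenders(LOG))
# [(('404', '/old-page'), 2), (('500', '/report'), 1)]
```

The output gives you exactly the prioritized list the solution calls for: decide per URL whether to redirect, fix the bad reference, or leave it as a 404.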

Problem: Poorly Written Robots.txt Files - A robots.txt file acts as a website gatekeeper that decides which bots and web crawlers can enter. Poorly written files can cause crawler accessibility problems and may negatively affect site traffic.

  • Solution: Cross check web traffic with robots.txt file updates to see if issues are being created, then consider using Google Webmaster Tools’ robots.txt Tester to scan and analyze your file for issues.
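Python's standard library ships a robots.txt parser that is handy for sanity-checking a file before deploying it. A small example with invented rules:

```python
from urllib.robotparser import RobotFileParser

# A sample robots.txt (hypothetical rules) parsed in memory; the same
# parser can fetch a live file via set_url(...) and read().
rules = """\
User-agent: *
Disallow: /admin/
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("Googlebot", "https://example.com/products"))     # True
print(parser.can_fetch("Googlebot", "https://example.com/admin/users"))  # False
```

A quick script like this, run against each candidate robots.txt with a list of URLs you definitely want crawled, catches accidental lockouts before they hit production.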
Severe Content Issues

Whether the issues are present within a site’s content strategy or are the result of creators being unaware of certain issues they may cause when expanding a website, the following severe SEO content issues can interfere with a well-designed website’s optimization.

Problem: No/Non-Strategic Title Tags - Often, programmatically created pages can create ineffective page titles, which can be caused by placing the URL in the title tag.

  • Solution: Consider changing how your programmatically created pages are generated regarding title tags or create a review process for your team to prevent complications after generation.
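Part of that review process can be automated with a heuristic check for URL-like titles; this sketch is deliberately simple and would need tuning for real content:

```python
def title_needs_review(title):
    """Flag programmatically generated titles that look like URLs or
    slugs rather than human-readable text (heuristic, not exhaustive)."""
    stripped = title.strip().lower()
    return (
        not stripped
        or stripped.startswith(("http://", "https://", "www."))
        or "/" in stripped
    )

print(title_needs_review("https://example.com/widgets"))   # True
print(title_needs_review("Widgets That Fit Every Budget")) # False
```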

Problem: No/Too Many/Non-Strategic H1 Tags - Every page should have one H1 title tag, but errors in programming or a lack of strategy can lead to a lack of H1 tags, too many H1s or poorly picked H1 titles, which negatively impact optimization.

  • Solution: Keep in mind that only one H1 tag should be used per page and create a strategy for headline usage that informs your programming and manual page creation.
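Counting H1 tags is easy to automate with Python's built-in HTML parser; a sketch:

```python
from html.parser import HTMLParser

class H1Counter(HTMLParser):
    """Count <h1> tags so a page can be flagged when it has zero or more than one."""
    def __init__(self):
        super().__init__()
        self.h1_count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.h1_count += 1

def check_h1(html_text):
    counter = H1Counter()
    counter.feed(html_text)
    if counter.h1_count == 1:
        return "ok"
    return "expected exactly one h1, found %d" % counter.h1_count

print(check_h1("<h1>Main</h1><h2>Sub</h2>"))  # ok
print(check_h1("<h1>A</h1><h1>B</h1>"))       # expected exactly one h1, found 2
```

Folding a check like this into the page-generation pipeline catches both missing and duplicated H1s before pages go live.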
What Tools Can Help You Solve Your Problems?

Now that you have identified your SEO issues, you will want to take effective action in order to prevent these problems from continuing to negatively affect your website. The following tools can be used for various issues.

  • Google Search Console: This tool can help you identify crawl issues, robots.txt problems and 4xx/5xx errors, and define URL parameters for Google's crawler.
  • Bing Webmaster Tools: While Google accounts for the vast majority of traffic, this tool provides the same help as Search Console, but for Bing.
  • Screaming Frog: This SEO spider tool will crawl websites and identify issues quickly so that you can implement fixes for the issues highlighted in the above content.
  • Facebook Open Graph Debugger: Make sure your content is displayed well through this tool to preview URLs in social networks.
  • ELK Stack: A log management platform, this software stack can provide multiple useful tools and can find 4xx and 5xx errors in your site.

You may need to use all of the above-mentioned tools or only some of them, depending on the problems you are experiencing. However, they may all be useful in preventing future problems with your website. By understanding what can negatively impact your site and eliminating those issues, combined with efforts to strengthen your site's overall SEO, your website will have a far stronger chance of ranking for the desired keywords.

Discover the Benefits of Strong SEO

Eliminating your SEO issues and strengthening your overall strategy will not necessarily immediately lead to massive amounts of traffic to a website. However, it will prevent the SEO efforts of your company and coworkers from being hampered. The first step is discovering these issues exist within your site, so make sure to take an in-depth look at your website’s technical and content-focused SEO efforts and begin the path to a stronger performing website today.

Reach Your Audience Through Digital Transformation

Digital transformation in the modern era is helping businesses improve their online outreach through SEO, audience targeting and much more. Learn how digital experience platforms can help you embrace the many advantages of digital transformation.

Read “Digital Experience Platforms: Designed for Digital Transformation”

Matthew Draper 2018-04-20T15:46:53Z
Categories: CMS, ECM

Building the Roparun Team Portal Part 1: Syncing CiviCRM participants to Drupal user records

CiviCRM - Fri, 04/20/2018 - 10:12

This is the first blog post about how we built the team portal for Roparun.

Categories: CRM

Vtiger CRM integrates with Azure AD to support secure single sign-on

VTiger - Thu, 04/19/2018 - 23:49
We are excited to announce a new integration that allows users to securely access Vtiger CRM from anywhere with ease. Vtiger CRM integrates with Azure Active Directory (AD) to allow administrators to enable single sign-on (SSO) for all CRM users. Once enabled, users can log in to Vtiger directly from the organizational accounts hosted in Azure […]
Categories: CRM

Organizing Successful Events: Online training on Thursday, April 26th

CiviCRM - Thu, 04/19/2018 - 17:44

Take your event management to the next level with this online session designed for current users of CiviCRM on Thursday, April 26th from 9 to 11 am Mountain Time. This course is an excellent follow-up to the Fundamentals of Event Management class taught by Cividesk.

Categories: CRM

Drupal core - Moderately critical - Cross Site Scripting - SA-CORE-2018-003

Drupal - Wed, 04/18/2018 - 10:34
Project: Drupal core
Date: 2018-April-18
Security risk: Moderately critical 12/25 AC:Complex/A:User/CI:Some/II:Some/E:Theoretical/TD:Default
Vulnerability: Cross Site Scripting
Description:

CKEditor, a third-party JavaScript library included in Drupal core, has fixed a cross-site scripting (XSS) vulnerability. The vulnerability stemmed from the fact that it was possible to execute XSS inside CKEditor when using the image2 plugin (which Drupal 8 core also uses).

We would like to thank the CKEditor team for patching the vulnerability and coordinating the fix and release process, and matching the Drupal core security window.

Solution: 
  • If you are using Drupal 8, update to Drupal 8.5.2 or Drupal 8.4.7.
  • The Drupal 7.x CKEditor contributed module is not affected if you are running CKEditor module 7.x-1.18 and using CKEditor from the CDN, since it currently uses a version of the CKEditor library that is not vulnerable.
  • If you installed CKEditor in Drupal 7 using another method (for example with the WYSIWYG module or the CKEditor module with CKEditor locally) and you’re using a version of CKEditor from 4.5.11 up to 4.9.1, update the third-party JavaScript library by downloading CKEditor 4.9.2 from CKEditor's site.
Categories: CMS

Joomla 3.8.7 Release

Joomla! - Wed, 04/18/2018 - 08:45

Joomla 3.8.7 is now available. This is a bug fix release for the 3.x series of Joomla including over 70 bug fixes and improvements.

Categories: CMS

New Installers and IDE 3.2.0 Milestone 1 Released

Liferay - Wed, 04/18/2018 - 01:04

New Installers Released

 

Hello all,

 

We are pleased to announce a new release of Liferay Project SDK 2018.4.4 installer, Liferay Project SDK with Dev Studio Community Edition installer and Liferay Project SDK with Dev Studio DXP Installer.

 

New Installers:

 

The new installers require at least Eclipse Oxygen. Customers can download all of them on the customer studio download page.

 

As with the previous 3.1 GA release, the installer is the full-fledged Liferay Developer Studio installer, which installs Liferay workspace, blade and Developer Studio, and comes pre-bundled with the latest Liferay DXP server. It also supports configuring a proxy for downloading gradle dependencies.

 

If you want to upgrade from Studio 3.1 B1 or 3.1 GA, you need to add the Oxygen update site and update to Oxygen first. Then you can upgrade through the Help > Install New Software... dialog.

 

Upgrade From previous 3.1.x:

  1. Download the update site
  2. Go to Help > Install New Software… > Add…
  3. Select Archive... and browse to the downloaded update site
  4. Click OK to close the Add Repository dialog
  5. Select all features to upgrade, click Next, click Next again and accept the license agreements
  6. Finish and restart to complete the upgrade

 

Release highlights:

  • Support Liferay Bundle 7.1
  • Bundle latest Liferay Portal

- bundle 7.1.0 Alpha in LiferayProjectSDKwithDevStudioCommunityEdition installers

- bundle DXP SP7 in LiferayProjectSDKwithDevStudioDXP installers

  • Third party plugins update

- update m2e to 1.8.2

- update bndtools to 4.0.0

- update gradle plugin buildship to 2.2.1

  • Code Update Tool

- more than 110 breaking changes for Liferay DXP/7

- improvements on auto fix

- performance improvement on finding breaking changes

  • Better Liferay Workspace Support

- update gradle workspace version to 1.9.0

- update maven workspace

  • Liferay DXP/7 bundle support improvement

- integrate Liferay DXP SP7 support for Tomcat and Wildfly

- integrate Liferay 7 CE GA5 support for Tomcat and Wildfly

  • Better deployment support for Liferay DXP/7

- integration of Blade CLI 3.0.0

- support Plugins sdk 1.0.16

- support Liferay Workspace Maven

- support Liferay Workspace Gradle 1.9.0

  • Miscellaneous bug fixes
Feedback

If you run into any issues or have any suggestions please come find us on our community forums or report them on JIRA (IDE project), we are always around to try to help you out. Good luck!

Yanan Yuan 2018-04-18T06:04:17Z
Categories: CMS, ECM

CiviCRM Melbourne Meetup, 6PM Thurs 19th April

CiviCRM - Tue, 04/17/2018 - 03:13

Looking forward to our second meetup of 2018 on Thursday, 19th April, 6PM at Melbourne Business School.

Same format as the last meetup: a round table discussion on various topics put forward by members of the group, leading with:

  • Scheduled Reports
  • Merging Contacts
  • Deduping contacts
  • Customising Invoices
  • Webforms

There will also be an introduction to GitLab, which CiviCRM recommends as a place for us to share content from the meetup.

Categories: CRM

Bringing DropWizard Metrics to Liferay 7/DXP

Liferay - Mon, 04/16/2018 - 22:12
Introduction

So in any production system, there is typically a desire to capture metrics, use them to define a system health check, and then monitor the health check results from an APM tool to preemptively notify administrators of problems.

Liferay does not provide this kind of functionality, but it was functionality that I needed for a recent project.

Rather than roll my own implementation, I decided that I wanted to start from DropWizard's Metrics library and see what I could come up with.

DropWizard's Metrics library is well known for its usefulness in this space, so it is an obvious starting point.

The Metrics Library

As a quick review, the Metrics library exposes objects representing counters, gauges, meters, timers and histograms. Based upon what you want to track, one of these metric types will be used to store the runtime information.

In addition, there's also support for defining a health check: a test that returns a Result (essentially a pass/fail), intended to be combined with the metrics as a basis for the result evaluation.

For example, you might define a Gauge for available JVM memory. As a gauge, it will basically be checking the difference between the total memory and used memory. A corresponding health check might be created to test that available memory must be greater than, say, 20%. When available memory drops below 20%, the system is not healthy and an external APM tool could monitor this health check and issue notifications when this occurs. By using 20%, you are giving admins time to get in and possibly resolve the situation before things go south.
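To make the gauge/health-check pairing concrete, here is a minimal, self-contained sketch of the memory example above. Note this is not the actual DropWizard Metrics API; the class and method names are illustrative stand-ins using only the JDK.

```java
// Minimal sketch of a memory "gauge" plus "health check", mirroring the
// DropWizard pattern described above. Illustrative names, not the real API.
public class MemoryHealthCheck {

    private final double minAvailableRatio;

    public MemoryHealthCheck(double minAvailableRatio) {
        this.minAvailableRatio = minAvailableRatio;
    }

    // The "gauge": fraction of the JVM's max memory still available,
    // computed from the difference between total and used memory.
    public double availableRatio() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return 1.0 - ((double) used / rt.maxMemory());
    }

    // The "health check": healthy while available memory stays at or
    // above the threshold (e.g. 0.20 for the 20% example above).
    public boolean check() {
        return availableRatio() >= minAvailableRatio;
    }
}
```

An APM tool polling check() on an instance built with a 0.20 threshold would see the result flip to unhealthy as soon as available memory drops below 20%, which is exactly the early-warning window described above.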

So that's the overview, but now let's talk about the code.

When I started reviewing the code, I was initially disheartened to see very little in the way of "design by interface". For me, design by interface is an indicator of how easy or hard it will be to bring the library into the OSGi container. With heavy design by interface, I can typically subclass key implementations and expose them as @Components, and consumers can just @Reference the interfaces and OSGi will take care of the wiring.

Admittedly, this kind of architecture can be considered overkill for a metrics library. The library developers likely planned for the lib to be used in java applications or even web applications, but likely never considered OSGi.

At this point, I really struggled with figuring out the best path forward. What would be the best way to bring the library into OSGi?

For example, I could create a bunch of interfaces representing the clean metrics and some interfaces representing the registries, then back all of these with concrete implementations as @Components that are shims on top of the DropWizard Metrics library. I soon discarded this because the shims would be too complicated, casting things back and forth between the interfaces and the Metrics library implementations.

I could have cloned the existing DropWizard Metrics GitHub repo and basically hacked it all up to be more "design by interface". The problem here, though, is that every update to the Metrics lib would require all of this repeated hacking up of their code to bring the updates forward. So this path was discarded.

I could have taken the Metrics library and used it as inspiration for building my own library. Except then I'd be stuck maintaining the library and re-inventing the wheel, so this path was discarded.

So I settled on a fairly light-weight solution that, I feel, is OSGi-enough without having to take over the Metrics library maintenance.

Liferay Metrics

The path I elected to take was to include and export the DropWizard Metrics library packages from my bundle and add in some Liferay-specific, OSGi-friendly metric registry access.

I knew I had to export the Metrics packages from my bundle since OSGi was not going to provide them and having separate bundles include their own copies of the Metrics jar would not allow for aggregation of the metrics details.

The Liferay-specific, OSGi-friendly registry access comes from two interfaces:

  • com.liferay.metrics.MetricRegistries - A metric registry lookup to find registries that are scoped according to common Liferay scopes.
  • com.liferay.metrics.HealthCheckRegistries - A health check registry lookup to find registries that are scoped according to common Liferay scopes.

Along with the interfaces, there are corresponding @Component implementations that can be @Reference injected via OSGi.

Liferay Scopes

Unlike a web application, where there is typically just one scope (the application), Liferay has a bunch of common scopes used to group and aggregate details. A metrics library is only useful if it too can support scopes in a fashion similar to Liferay. Since the DropWizard Metrics library supports different metric registries, it was easy to overlay the common Liferay scopes over the registries.

The supported scopes are:

  • Portal (Global) scope - This registry would contain metrics that have no separate scope requirements.
  • Company scope - This registry would contain metrics scoped to a specific company id. For example, if you were counting logins by company, the login counter would be stored in the company registry so it can be tracked separately.
  • Group (Site) scope - This registry would contain metrics scoped to the group (or site) level.
  • Portlet scope - This registry would contain metrics scoped to a specific portlet plid.
  • Custom scope - This is a general way to define a registry by name.

Using these scopes, different modules that you create can look up a specific metric in a specific scope without tight coupling between your own modules.
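The scoped-lookup idea can be sketched with a plain map of registries keyed by scope name. This is a simplified stand-in for the MetricRegistries component described above (the real implementation wraps DropWizard's MetricRegistry; the names below are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Simplified stand-in for the scoped registry lookup described above.
// Each scope key ("portal", "company:<id>", "group:<id>", ...) maps to its
// own registry of named counters, so unrelated modules can share a metric
// by scope + name without referencing each other directly.
public class ScopedRegistries {

    private final Map<String, Map<String, AtomicLong>> registries =
        new ConcurrentHashMap<>();

    private Map<String, AtomicLong> registry(String scope) {
        return registries.computeIfAbsent(scope, s -> new ConcurrentHashMap<>());
    }

    public Map<String, AtomicLong> getPortalRegistry() {
        return registry("portal");
    }

    public Map<String, AtomicLong> getCompanyRegistry(long companyId) {
        return registry("company:" + companyId);
    }

    // Find-or-create a named counter within a registry.
    public AtomicLong counter(Map<String, AtomicLong> registry, String name) {
        return registry.computeIfAbsent(name, n -> new AtomicLong());
    }
}
```

Two independent modules calling getCompanyRegistry(42) and asking for the "pending-jobs" counter get the same counter instance, while company 7's counter of the same name stays separate.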

Metrics Servlets

The DropWizard Metrics library ships with a few useful servlets, but to use them you need to be able to add them to your web application's web.xml file. In Liferay/OSGi, instead we want to leverage the OSGi HTTP Whiteboard pattern to define an @Component that gets automagically exposed as a servlet.

The Liferay Metrics bundle does just that; it exposes five of the key DropWizard servlets, but they use OSGi facilities and the Liferay-specific interfaces to provide functionality.

The following list describes the servlets:

  • CPU Profile (/o/metrics/gprof): Generates and returns a gprof-compatible file of profile details.
  • Health Check (/o/metrics/health-checks): Runs the health checks and returns a JSON object with the results. Takes two arguments: type (for the desired scope) and key (for company or group id, plid or custom scope name).
  • Metrics (/o/metrics/metrics): Returns a JSON object with the metrics for the given scope. Takes the same two arguments, type and key, as the health checks servlet.
  • Ping (/o/metrics/ping): Simple servlet that responds with the text "pong". Can be used to test that a node is responding.
  • Thread Dump (/o/metrics/thread-dump): Generates a thread dump of the current JVM.
  • Admin (/o/metrics/admin): A simple menu to access the above listed servlets.

The Ping servlet can be used to test if the node is responding to requests. The Metrics servlet can be used to pull all of the metrics at the designated scope for evaluation in an APM for alerting. The Health Check servlet can run health checks defined in code that perhaps needs access to server-side details to evaluate health, but they too can be invoked from an APM tool to evaluate health.

The CPU Profile and Thread Dump servlets can provide useful information to assist with profiling your portal or capturing a thread dump to, say, submit to Liferay support on a LESA ticket.

The Admin servlet, while not absolutely necessary, provides a convenient way to get to the individual servlets.

NOTE: There are no security or permission checks bound to these servlets. It is expected that you would take appropriate steps to secure their access in your environment, perhaps via firewall rules to block external access to the URLs or whatever is appropriate to your organization.

Metrics Portlet

In addition, there is a really simple Liferay MVC portlet under the Metrics category, the Liferay Metrics portlet. This super-simple portlet just dumps all of the information from the various registries. It can be used by an admin to view what is going on in the system, but if used it should be permissioned to prevent casual access by general users.

Using Liferay Metrics

Now for some of the fun stuff...

The DropWizard Metrics Getting Started page shows a simple example for measuring pending jobs in a queue:

private final Counter pendingJobs = metrics.counter(name(QueueManager.class, "pending-jobs"));

public void addJob(Job job) {
    pendingJobs.inc();
    queue.offer(job);
}

public Job takeJob() {
    pendingJobs.dec();
    return queue.take();
}

Our version is going to be different than this, of course, but not by much. Let's assume that we are going to be tracking the metrics for the pending jobs by company id. We might come up with something like:

@Component(immediate = true)
public class CompanyJobQueue {

    public void addJob(long companyId, Job job) {
        // fetch the counter
        Counter pendingJobs = _metricRegistries.getCompanyMetricRegistry(companyId).counter("pending-jobs");

        // increment
        pendingJobs.inc();

        // do the other stuff
        queue.offer(job);
    }

    public Job takeJob(long companyId) {
        // fetch the counter
        Counter pendingJobs = _metricRegistries.getCompanyMetricRegistry(companyId).counter("pending-jobs");

        // decrement
        pendingJobs.dec();

        // do the other stuff
        return queue.take();
    }

    @Reference(unbind = "-")
    protected void setMetricRegistries(final MetricRegistries metricRegistries) {
        _metricRegistries = metricRegistries;
    }

    private MetricRegistries _metricRegistries;
}

The keys here are that the MetricRegistries is injected by OSGi, and that class is used to locate a specific instance of the DropWizard Metrics registry instance where the metrics can be retrieved or created. Since they can be easily looked up, there is no reason to hold a reference to the metric indefinitely.

In the liferay-metrics repo, there are some additional examples that demonstrate how to leverage the library from other Liferay OSGi code.

Conclusion

So I think that kind of covers it. I've pulled in the DropWizard Metrics library as-is, I've exposed it into the OSGi container so other modules can leverage the metrics, I've provided an OSGi-friendly way to inject registry locators based on common Liferay scopes. There's the exposed servlets which provide APM access to metrics details and a portlet to see what is going on using a regular Liferay page.

The repo is available from https://github.com/dnebing/liferay-metrics, so feel free to use and enjoy.

Oh, and if you have some additional examples or cool implementation details, please feel free to send me a PR. Perhaps the community can grow this out into something everyone can use...

David H Nebinger 2018-04-17T03:12:36Z
Categories: CMS, ECM

Upcoming GDPR-focused features for Liferay DXP

Liferay - Mon, 04/16/2018 - 16:01

May 25 is fast approaching. Every business impacted by GDPR should be well underway in preparing for the changes to data processing set forth by the regulation. To address the heightened requirements for empowering users' control of their personal data, Liferay has been evaluating and building features into Liferay DXP to aid our customers in their journey toward compliance. I wanted to share what customers can expect in the upcoming release of Liferay Digital Enterprise 7.1 this summer (with an update to DE 7.0 scheduled thereafter with the same features).

But First...

Before jumping into the details of what Liferay is building, allow me to reiterate something I've been stressing in our papers, blogs, and talks: GDPR compliance cannot be achieved by simply checking off a list of technical requirements. True compliance requires businesses to holistically adopt both organizational and technical practices of greater protection for their users' personal data. This may include training personnel, auditing all stored user data, establishing data breach response strategies, appointing a data protection officer, redesigning websites to obtain consent for targeted marketing, responding to users' right to be forgotten, etc. Beware of vendors that supposedly provide turnkey solutions for GDPR compliance, regardless of what they promise (and how much they cost). No such solution exists.

In regard to the technical measures GDPR stipulates, the heart of the regulation is encapsulated by the requirement of data protection by design and by default. As businesses select Liferay DXP to build their digital transformation solution, the responsibility falls on the business to design their solution in a way that satisfies this concept of “data protection by design and by default.”

Though no software product can truthfully claim to be “GDPR compliant,” the platform and tools provided by the product can greatly accelerate or hinder a business’s journey toward compliance. Out of the box, Liferay DXP already provides rich capabilities for designing and managing privacy-centric solutions (some of which are described in our Data Protection for Liferay Services and Software whitepaper), but there's much more we can provide to help our customers.

After wrestling with the couple hundred pages of regulation, we decided to first focus on the concrete requirements that are most painful for customers to implement themselves.
Specifically, we evaluated GDPR's data subject rights and identified the right to be forgotten and the right to data portability as the most challenging to tackle given Liferay DXP’s current feature set. Google Trends also affirms these two are of greatest interest (and likely anxiety) among users.

So here's what Liferay's engineering team has been working on:

Right To Be Forgotten

The right to be forgotten (technically known as the “right to erasure”) requires organizations to delete an individual’s personal data upon his/her request (excluding data the organization has a legitimate reason to retain, like financial records, public interest data, etc.). Personal data is considered erased when the data can no longer be reasonably linked to an identifiable individual and is thus no longer subject to GDPR. This can be accomplished by simply deleting or carefully anonymizing the personal data. Proper anonymization is difficult and tedious but may be the preferred option depending on the business’s use case. For example, Liferay wants to keep the technical content on our community forums, but we must sanitize the posts and scrub personal data if a user invokes his right to be forgotten.

Our engineering team is adding a tool to the user management section to review a user's personal data stored on Liferay. The UI will present the user's personal data per application (Blogs, Message Boards, Announcements, third-party apps, etc.). Administrators can either delete the data or edit the content in preparation for anonymization. For example, if a community member writes a blog post containing useful technical information (for example: DXP upgrade tips) but also started the blog with an anecdotal story that contains personal information (for example: “My daughter Alyssa once told me …”), an administrator may want to remove the personal story. After satisfactorily editing the content, the data erasure tool can automatically scrub data fields like userName and userId.
The tool will also automatically scrub these data fields from system data tables like Layout and BackgroundTask.

Accompanying the UI is a programmatic interface to mark data fields potentially containing personal data. Any third-party application can implement these interfaces to surface personal data through the UI. The interface also allows custom logic to anonymize or delete personal data. For example, instead of deleting a user's entire postal address, customers may want to keep just the zip code for analytics purposes.

Right To Data Portability

The right to data portability requires organizations to provide a machine-readable export of a user's personal data upon request. The regulation's goal is to prevent vendor lock-in, where users find the cost of switching service providers too burdensome. In theory, this right empowers individuals to migrate their data from their current mortgage provider to a competitor, for example. The regulation even stipulates that organizations should transfer a user's personal data directly to another organization where “technically feasible,” though this likely won't be a reality in the near future.

Alongside our data erasure tool, our engineering team is building a tool to export a user's personal data. This will behave similarly to Liferay's import/export pages feature, except the focus will be on exporting personal data rather than page data. The administrator UI will list a user's personal data per application and asynchronously export the data.

Down The Road

This is only the beginning of the privacy-focused features we plan to bake into our platform. Though the roadmap for 7.2 is still up in the air, we're evaluating ideas like changes to Service Builder's data schema to potentially aid pseudonymization (separating personal data from identifiable individuals via some key). We've considered building a privacy dashboard for end users to visualize and control their own personal data.
We've also thought about baking in a consent manager so businesses can better comply with the strengthened consent requirements.

Privacy is a justifiably growing concern that ultimately reaches beyond the territorial scope of GDPR. The May 25 deadline is forcing organizations to evaluate and implement the ethical impact of data collection in this brave new digital world. Currently much of that conversation stems from FUD, leading to rubbish misinformation. But the dust will settle in the coming months and years. Organizations caught unprepared will potentially face costly penalties. Better and best privacy practices will eventually emerge and become standard practice, not unlike the standard InfoSec practices that have developed over the last couple of decades. Throughout that process, Liferay will continuously evaluate what our platform and services can provide to aid our customers in their journey toward thoughtfully guarding their users' data.

If you'd like to better understand how your organization can prepare for GDPR, check out our webinar: GDPR: Important Principles & Liferay DXP.

Dennis Ju 2018-04-16T21:01:59Z
Categories: CMS, ECM

Dries Buytaert Shares His View on Decoupled Drupal: When, Why, and How

Drupal - Mon, 04/16/2018 - 13:04

The following blog was written by Drupal Association Signature Hosting Supporter, Acquia

More and more developers are choosing content-as-a-service solutions known as decoupled CMSes, and due to this trend, people are asking whether decoupled CMSes are challenging the market for traditional CMSes.

By nature, decoupled CMSes lack end-user front ends, provide few to no editorial tools for display and layout, and as such leave presentational concerns almost entirely up to the front-end developer. Luckily, Drupal has one crucial advantage that propels it beyond these concerns of emerging decoupled competitors.

Join Dries Buytaert, founder of Drupal and CTO at Acquia, as he shares his knowledge on how Drupal has an advantage over competitors, and discusses his point-of-view on why, when, and how you should implement decoupled Drupal.

Dries will touch on:

  • His thoughts on decoupled CMSes - where is the CMS market headed and when?
  • His opinion on whether decoupled CMSes will replace traditional CMSes
  • The advantages of decoupled Drupal vs. emerging decoupled competitors
  • Considerations when determining if decoupled Drupal is right for your project
Click here to watch the webinar.

Dries Buytaert

Chairman, Chief Technology Officer, Acquia, Inc.

Dries Buytaert is an open source developer and technology executive. He is the original creator and project lead for Drupal, an open source platform for building websites and digital experiences. Buytaert is also co-founder and chief technology officer of Acquia, a venture-backed technology company. Acquia provides an open cloud platform to many large organizations, which helps them build, deliver and optimize digital experiences. A Young Global Leader at the World Economic Forum, he holds a PhD in computer science and engineering from Ghent University and a Licentiate in Computer Science (MSc) from the University of Antwerp. He was named CTO of the Year by the Massachusetts Technology Leadership Council, New England Entrepreneur of the Year by Ernst & Young, and a Young Innovator by MIT Technology Review. He blogs frequently on Drupal, open source, startups, business, and the future at dri.es.

LinkedIn

Twitter

https://www.acquia.com/resources/webinars/dries-buytaert-shares-his-view-decoupled-drupal-when-why-and-how?cid=7010c000002ZXzYAAW&ct=online-advertising&ls=drupalpremiumbenefits-dries&lls=pro_ww_drupalassociationpremiumbenefits_2018

Categories: CMS

Why Great B2B Customer Experiences are More Important Than Ever

Liferay - Mon, 04/16/2018 - 10:31

Modern and personalized customer experiences that rely on cutting-edge technology have played a major role in the business to consumer (B2C) market for many years. However, the business to business (B2B) market is beginning to rely on great user experiences more than ever, with many companies adopting user interfaces, such as portals, that reflect the personalized and fast experiences most often seen in the B2C market.

These great B2B user experiences are continuing to grow in importance for companies as more and more processes move online. According to Forrester, B2B eCommerce will account for 13.1% of all B2B sales in the United States by the year 2021, indicating a steady increase for the foreseeable future when compared to the 11% share of B2B eCommerce seen in 2017.

With B2B digital experiences continuing to play an increasingly crucial role in the long-term success of companies, it is important that businesses work to improve and refine their online presence. But the question remains, what makes a great B2B user experience?

The Influence of B2C on B2B User Experience

According to McKinsey research, B2B customer experience index ratings rank far lower than their B2C counterparts, with the average B2B company scoring below 50 percent compared to the typical 65 to 85 percent scored for B2C companies. This indicates that the majority of B2B customer experience audience members are dissatisfied with their online interactions with companies in the industry.

While there is a difference between the audiences and goals of B2B and B2C companies, the modern customer does not necessarily distinguish between the two in their minds. B2B customers interact with B2C experiences every day, such as shopping on Amazon for their own personal needs. These B2C companies are continually providing the latest in digital experiences in an effort to compete with one another in ways that may not be seen as often in the B2B realm. The result is that consistently rising customer expectations regarding B2C experiences are migrating to the B2B sphere.

Today’s B2B audience has grown to expect well-designed user interfaces that remember their interests, provide services and products that predict needs based on past purchases and more features that make the journey quick and easy to navigate.

What is Holding Back Your B2B User Experience?

As discussed by Customer Think, only 17% of B2B companies have fully integrated customer data throughout the organization, which means that the decisions being made by these businesses are often based on flawed or incomplete data insights. Should a company be unable to access customer insights from all departments, such as customer service or social media, they may miss out on specific aspects of the experience that highly influence the overall quality of business interactions, as well as data that can provide a more accurate view of each audience member.

Beyond gathering data to enhance experiences, businesses may not have the capabilities needed to completely control and execute their customer experience strategy. Research by Accenture found that only 21% of B2B companies have total control over their sales partners, who are largely responsible for delivering CX to their audience. If a business is unable to determine how, when and to whom these experiences are provided, even a well-constructed B2B user interface can result in an unsuccessful experience.

Back-end integration that allows greater and more accurate access to customer data, modern interfaces that allow for personalization based on individual needs and improved delivery systems governing how these interfaces are provided to audience members can greatly enhance a company’s modern B2B user experience.

How Can a Great Customer Experience Impact Your B2B Relationships?

Great customer experience strategies work to create an environment that is free of friction and provides users with a journey that meets their every need as quickly and easily as possible. While B2B audiences may be less likely than B2C audiences to abandon a purchase or switch to a competitor over a poor experience, the impact of experience on long-term relationships is steadily increasing.

According to research on B2C and B2B experiences by the Temkin Group, 86 percent of customers who receive a great customer experience are likely to return for another purchase, while only 13 percent of those who had a sub-par experience will. In addition, Rosetta found that engaged, satisfied customers buy 50 percent more frequently and spend 200 percent more annually.

The importance of creating great B2B experiences lies not just in keeping up with competitors and audiences; it also has a positive impact on company performance. As shown by McKinsey, B2B companies that transformed their customer experience processes saw benefits similar to those seen by B2C companies, including 10 to 15 percent revenue growth, higher client satisfaction scores, improved employee satisfaction, and a 10 to 20 percent reduction in operational costs.

The combination of these benefits means a higher ROI on B2B operations, supporting the company as a whole.

Create an Effective B2B Customer Experience

A well-crafted customer experience will help to meet your audience needs and encourage long-term client relationships, and it all begins with an effective strategy. Learn more about what strategy is right for you with our whitepaper insights.

Read “Four Strategies to Transform Your Customer Experience”   Matthew Draper 2018-04-16T15:31:44Z
Categories: CMS, ECM

PrestaShop for a large online store. Is it a good idea?

PrestaShop - Mon, 04/16/2018 - 09:36
Can the PrestaShop eCommerce platform meet the needs of large companies and online stores? We checked. See our conclusions and examples of completed projects.
Categories: E-commerce

Liferay 7/DXP: Making Logging Changes Persistent

Liferay - Mon, 04/16/2018 - 09:26
Introduction

I have never liked one aspect of Liferay logging: it is not persistent.

For example, I can't debug a startup issue unless I set up and deploy a portal-log4j-ext.xml file.

That's not such a big deal as a developer, but as a portal admin, if I use the control panel to change logging levels, I don't expect them to be lost just because the node restarts.

Solution

So about a year ago, I created the log-persist project.

The intention of this project is to persist logging changes.

The project itself contains 3 modules:

  • A ServiceBuilder API jar to define the interface over the LoggingConfig entity.
  • A ServiceBuilder implementation jar for the LoggingConfig entity.
  • A bundle that contains a portlet ActionFilter implementation to intercept incoming ActionRequests for the Server Administration portlet with the logging config panel.

The ServiceBuilder aspect is pretty darn simple: there is only a single entity defined, LoggingConfig, which represents a logging configuration.
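For readers unfamiliar with ServiceBuilder, such an entity is declared in a service.xml file. The sketch below shows roughly what a LoggingConfig definition could look like; the package path, namespace, and column names here are illustrative guesses, not necessarily what the log-persist project actually uses:

```xml
<?xml version="1.0"?>
<!DOCTYPE service-builder PUBLIC "-//Liferay//DTD Service Builder 7.0.0//EN"
  "http://www.liferay.com/dtd/liferay-service-builder_7_0_0.dtd">

<service-builder package-path="com.example.logpersist">
  <namespace>LogPersist</namespace>

  <!-- One row per saved logging configuration. -->
  <entity name="LoggingConfig" local-service="true" remote-service="false">
    <column name="loggingConfigId" type="long" primary="true" />
    <!-- The logger/category name, e.g. com.liferay.portal.kernel -->
    <column name="name" type="String" />
    <!-- The saved level, e.g. DEBUG, INFO, WARN -->
    <column name="priority" type="String" />
  </entity>
</service-builder>
```

Running the ServiceBuilder build against a file like this generates the API and implementation layers that the first two modules provide.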

The action is in the ActionFilter component, which wires itself to the Server Administration portlet. All incoming ActionRequests (that is, every action a user performs on the Server Administration portlet) are intercepted by the filter. The filter passes the ActionRequest on to the real portlet code; upon return, it checks whether the command was "addLogLevel" or "updateLogLevels", the two commands the portlet uses to change log levels. For those commands, the filter extracts the form values and passes them to the ServiceBuilder layer to persist.

Additionally, the filter has an @Activate method that is invoked when the component starts. In this method, the code loads all of the persisted LoggingConfig entities and re-applies them to the Liferay logging configuration.
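The flow described above can be sketched in plain Java. This is not the Liferay API (no ActionFilter, OSGi, or ServiceBuilder classes here) and every name in it is hypothetical; it only models the two moments that matter: persisting a level change after the portlet action runs, and re-applying persisted levels when the component activates after a restart.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Illustrative sketch of the log-persist flow. The "store" map stands in
 * for the LoggingConfig ServiceBuilder table (it survives restarts), while
 * "liveLevels" stands in for Liferay's in-memory logging configuration
 * (it does not).
 */
public class PersistedLogConfig {
    private final Map<String, String> store;   // persisted: survives restarts
    private final Map<String, String> liveLevels = new LinkedHashMap<>();

    public PersistedLogConfig(Map<String, String> store) {
        this.store = store;
    }

    /**
     * Mirrors the ActionFilter: the real portlet action runs first, then,
     * if the command was one of the two level-changing commands, the form
     * values are persisted.
     */
    public void afterAction(String cmd, String loggerName, String level) {
        applyLevel(loggerName, level); // the "real" portlet action's effect
        if ("addLogLevel".equals(cmd) || "updateLogLevels".equals(cmd)) {
            store.put(loggerName, level);
        }
    }

    /** Mirrors the @Activate method: re-apply every persisted entry. */
    public void activate() {
        store.forEach(this::applyLevel);
    }

    private void applyLevel(String loggerName, String level) {
        liveLevels.put(loggerName, level);
    }

    public String liveLevel(String loggerName) {
        return liveLevels.get(loggerName);
    }
}
```

Constructing a second instance over the same store simulates a node restart: the live levels are empty until activate() replays the persisted entries.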

All you need to do is build the three modules and drop them into your Liferay deploy folder; they'll take care of the rest.

Conclusion

So that's it. I should note that the last module is not really necessary: it only contains a single component, the ActionFilter implementation, and there's no reason it has to be in its own module. It could certainly be merged into the API module or the service implementation module.

But it works. The logging configuration persists across restarts and, as an added bonus, the changes are applied across the cluster during startup.

It may not be a perfect implementation, but it will get the job done.

You can find it in my git repo: https://github.com/dnebing/log-persist

David H Nebinger 2018-04-16T14:26:22Z
Categories: CMS, ECM