Drupal's commitment to accessibility
This blog has been re-posted and edited with permission from Dries Buytaert's blog. Please leave your comments on the original post.
Last week, WordPress Tavern picked up my blog post about Drupal 8's upcoming Layout Builder. While I'm grateful that WordPress Tavern covered Drupal's Layout Builder, it is not surprising that the majority of WordPress Tavern's blog post alludes to the potential challenges with accessibility. After all, Gutenberg's lack of accessibility has been a big topic of debate, and a point of frustration in the WordPress community.
I understand why organizations might be tempted to de-prioritize accessibility. Making a complex web application accessible can be a lot of work, and the pressure to ship early can be high. In the past, I've been tempted to skip accessibility features myself. I believed that because accessibility features benefited only a small group of people, they could come in a follow-up release.
Today, I've come to believe that accessibility is not something you do for a small group of people. Accessibility is about promoting inclusion. When the products we use daily are accessible, we all get to work with a greater number and a greater variety of colleagues. Accessibility benefits everyone.
As you can see in Drupal's Values and Principles, we are committed to building software that everyone can use. Accessibility should always be a priority. Making capabilities like the Layout Builder accessible is core to Drupal's DNA. Drupal's Values and Principles carry into our development process through what we call an accessibility gate, where we set a clearly defined "must-have bar." Prioritizing accessibility also means that we commit to iteratively improving accessibility beyond that minimum over time. Together with the accessibility maintainers, we jointly agreed that:
- Our first priority is WCAG 2.0 AA conformance. This means that in order to be released as a stable system, the Layout Builder must reach Level AA conformance with WCAG. Without WCAG 2.0 AA conformance, we won't release a stable version of Layout Builder.
- Our next priority is WCAG 2.1 AA conformance. We're thrilled at the greater inclusion provided by these new guidelines, and will strive to achieve as much of it as we can before release. Because these guidelines are still new (formally approved in June 2018), we won't hold up the stable release of Layout Builder on them, but we are committed to implementing them as quickly as we're able to, even if some of that work lands after the initial release.
- While WCAG AAA conformance is not something currently being pursued, there are aspects of AAA that we are discussing adopting in the future. For example, the new 2.1 AAA "Animations from Interactions", which can be framed as an achievable design constraint: anywhere an animation is used, we must ensure designs are understandable/operable for those who cannot or choose not to use animations.
Unconference 2018
//The Spanish version of this article can be found here: Unconference 2018. The Unconference took place on November 6 at Pakhuis de Zwijger, the day before Liferay DevCon in Amsterdam began. I had read about this kind of session, but I had never taken part in one. Spoiler: I loved it. If you have ever taken part in one, you'll know that an Unconference agenda doesn't exist before it starts. If you haven't, I'm going to talk about the Liferay Unconference 2018 to explain how it works. First of all, Olaf talked about the different spaces (talk zones, lunch zone, toilets, etc.), lunchtime, organization and, very importantly, about 4 principles and 1 law. The 4 principles:
- Whoever comes is the right people (every participant wants to be there)
- Whatever happens is the only thing that could have (what matters is what is happening in that place and at that moment)
- Whenever it starts is the right time (you don't have to wait for anything to start the session)
- When it's over, it's over (and until it's not over, it's not over; make the most of the session)
- Law of Two Feet: If at any time you are in a situation where you are neither learning nor contributing, then you can use your two feet to go to a more productive place. This law is important to understand that every participant is present voluntarily.
- Migration experiences: LR 6.2 --> 7.0
- If you want to upgrade the Liferay version and also migrate to an open-source database, it's better to migrate the database first and upgrade Liferay afterwards.
- Config & content orchestration - site initializers
- There are several implementations; one was created by the Commerce team.
- The Liferay team is evaluating including it in future versions.
- Liferay + TensorFlow
- This is new functionality coming in 7.2; it is already available on GitHub.
- It is currently implemented for images but works at the asset level.
- How do you monitor your Liferay application & plugins
- Use the JVM to check threads (see the thread dump sketch after this list)
- Java thread dump analyzer -> http://fastthread.io
- Documentation: What is missing?
- Slack vs. forum – how to manage questions without losing information
- DXP vs. CE
- We talked about the two versions (CE and DXP) and the possibility of paying licenses for support while the code of both versions stays the same.
- Workspace: tips & tricks | How do you use/extend the workspace? Do you use plugins?
- When upgrading to version 7, if you can't migrate everything to modules, they recommend at least migrating the services.
- compile only - instruction call to proof
- Philosophy/Mentality
- Attitude (everyone wants to share, teach and learn)
- You can propose topics easily
- You have a lot of opportunities to learn (including from experts in the field).
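For the monitoring topic above: a thread dump can be captured from the command line with tools like jstack or jcmd and then pasted into an analyzer such as fastthread.io. As a small illustration (my own sketch, not something shown at the session), the standard java.lang.management API can also produce an equivalent dump from inside the JVM:
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Minimal sketch: print a thread dump of the current JVM, suitable for pasting
// into a thread dump analyzer. Illustrative only, not Liferay-specific.
public class ThreadDumper {

    public static void main(String[] args) {
        ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();

        // true, true = include locked monitors and locked synchronizers
        for (ThreadInfo threadInfo : threadMXBean.dumpAllThreads(true, true)) {
            System.out.println(
                "\"" + threadInfo.getThreadName() + "\" " + threadInfo.getThreadState());

            for (StackTraceElement element : threadInfo.getStackTrace()) {
                System.out.println("    at " + element);
            }

            System.out.println();
        }
    }
}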
Unconference 2018
//The english version of this article can be found here: Unconference 2018. On November 6, the day before DevCon 2018 began, the Liferay Unconference took place at Pakhuis de Zwijger. I had read about this kind of session but had never taken part in one. Spoiler: I loved it. If you have already taken part in one, you will know that these sessions don't have an agenda defined in advance; it is defined during the session itself by the attendees. If you haven't, I'll explain a bit how it went so you can see how it works. First, Olaf gave us an introduction in which he talked about the layout of the different zones, food, organization and, of course, the 4 principles and the law. The 4 principles:
- Whoever comes is the right person (since by attending they are showing interest).
- Whatever happens is the only thing that could have happened (think about what is happening in that moment and in that place).
- Whenever it starts is the right time (this refers to the start of the sessions; they can begin without waiting for anything in particular, such as for everyone to arrive).
- When it's over, it's over (it is also often stated in the negative form, until it is over, it is not finished; the point is to make the most of the time).
- Law of Two Feet: if you are not contributing and not getting anything out of it, you can leave (if at any point you feel you are neither learning nor contributing anything, use your two feet, that is, you can go to another session. The idea is that nobody should stay in a meeting they find boring).
- Migration experience: LR 6.2 --> 7.0
- To migrate the database to an open-source one, it is better to migrate the database first and the Liferay version afterwards.
- Config & content orchestration – site initializers
- There are several proposals; one of them was created by the Commerce team.
- Including it in upcoming versions is being evaluated.
- Liferay + TensorFlow
- It will be available in version 7.2, although it can already be downloaded from GitHub.
- It will work at the asset level and is implemented for images.
- How to monitor your Liferay applications and plugins.
- Check the threads in the JVM.
- Very useful for analyzing thread dumps -> http://fastthread.io
- Documentation: What is missing?
- Slack vs. forum – how to manage questions so that information isn't lost.
- DXP vs. CE
- Among other topics, we talked about managing the two versions and options such as keeping the code of both versions the same and paying the license for support, for example.
- Workspace: tips & tricks | How do you use/extend the workspace? Do you use plugins?
- When moving to 7, they recommend at least moving the services to modules.
- compile only - instruction call to proof
- Philosophy/Mentality (the idea this kind of conference is based on)
- Attitude (everyone comes to contribute, share and learn)
- You can propose topics that interest you
- It is easy to share and learn (even with experts in the field).
New learning material on Liferay University
Have you checked out Liferay University by now? Or even better: Liferay Passport - the all-inclusive version of University? If you did, you might want to come back and check the new content. If you haven't... why not? Since my last update, we've added two more free lessons and one full course:
- Increase completion rates for your Forms with adaptive rules
- Getting started with Liferay Help Center (the new Enterprise Support Tool)
- Upgrading Liferay (from a Liferay Portal 6.2 EE installation to Liferay DXP 7.1)
Just published extension Contact Specific API Defaults
I have just published the CiviCRM native extension Contact Specific API Defaults, which allows you to specify defaults for API fields for a specific contact.
The extension was funded by Mediwe and developed by CiviCooP based on this use case:
Membership Management online training: Wed. December 5th, 12 pm MT
Cividesk is offering the Fundamentals of Membership Management training session for new users on Wednesday, December 5th, starting at 11 am PT / 12 pm MT / 2 pm ET. This training class will help you get started using the membership module by learning how to configure membership types, create an online membership sign-up page, track current and expired members, create a membership report, and much more.
Getting Started with Talend Open Studio: Run and Debug Your Jobs
In the past blogs, we have learned how to install Talend Open Studio, how to build a basic job loading data into Snowflake, and how to use a tMap component to build more complex jobs. In this blog, we will enable you with some helpful debugging techniques and provide additional resources that you can leverage as you continue to learn more about Talend Open Studio. As with our past blogs, you are welcome to follow along in our on-demand webinar. This blog corresponds with the last video of the webinar.
In this tutorial, we will quickly address how to successfully debug your Talend jobs, should you run into errors. Talend classifies errors into two main categories: compile errors and runtime errors. A compile error prevents your Java code from compiling properly (this usually includes syntax errors or Java class errors). A runtime error prevents your job from completing successfully, resulting in the job failing during execution. In the previous blog, we designed a Talend job to generate a sales report and get data into a Snowflake cloud data warehouse environment. For the purposes of this blog, we have altered that job so that when we try to run it, we will see both types of errors. In this way, we can illustrate how to resolve both types of errors.
Resolving Compile Errors in Talend Open Studio
Let's look at a compile error. When we execute this job in Talend Studio, it will first attempt to compile; however, the compile will fail with the error below. You can review the Java error details within the log, which states that "quantity cannot be resolved or is not a field". Conveniently, it also highlights the component the error is most closely associated with. To locate the specific source of the problem within the tMap component, you can either dive into the tMap and search yourself, or you can switch to the Code view. Although you cannot directly edit the code here, you can select the red box highlighted on the right of the scroll bar to bring you straight to the source of the issue. In this case, the arithmetic operator is missing from the Unit Price and Quantity equation. Next, head into the tMap component and make the correction to the Unit Price and Quantity equation by adding a multiplication operator (*) between Transactions.Unit_Price and Transactions.qty. Click Ok and run the job again. You will see that the compile error has been resolved.
Resolving Runtime Errors in Talend Open Studio
Next, the job attempts to send the data out to Snowflake, and a runtime error occurs. You can read the log, which says, "JDBC driver not able to connect to Snowflake" and "Incorrect username or password was specified". To address this issue, we'll head to the Snowflake component and review the credentials. It looks like the Snowflake password was incorrect, so re-enter the Snowflake password and click run again to see if that resolved the issue. And it did! This job has been successfully debugged and the customer data has been published to the Snowflake database.
Conclusion
This was the last of our planned blogs on getting started with Talend Open Studio, but there are other resources that you can access to improve your skills with Talend Open Studio. Here are some videos that we recommend you look at to strengthen and add on to the skills that you have gained from these past four blogs:
Joining Two Data Sources with the tMap Component – This tutorial will give you some extra practice using tMap to join your data, complete with downloadable dummy data and PDF instructions.
Adding Condition-Based Filters Using the tMap Component – tMap is an incredibly powerful and versatile component with many uses, and in this tutorial, you will learn how to use tMap and its expression builder to filter data based on certain criteria.
Using Context Variables – Learn how to use context variables, which allow you to run the same job in different environments.
For immediate questions, be sure to visit our Community, and feel free to let us know what types of tutorials would be helpful for you. The post Getting Started with Talend Open Studio: Run and Debug Your Jobs appeared first on Talend Real-Time Open Source Data Integration Software.
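To make the compile-error fix concrete: tMap expressions are plain Java, so the generated job stops compiling as soon as the operator between the two columns is missing. The snippet below is an illustrative stand-alone sketch (the names mirror the blog's example schema; it is not the code Talend generates):
// Illustrative only: the broken vs. corrected tMap expression in plain Java.
public class ExpressionFixSketch {

    static class TransactionsRow {
        double Unit_Price; // field names mirror the Talend schema in this example
        int qty;
    }

    public static void main(String[] args) {
        TransactionsRow Transactions = new TransactionsRow();

        Transactions.Unit_Price = 2.50;
        Transactions.qty = 4;

        // Broken expression: "Transactions.Unit_Price Transactions.qty" has no
        // operator, which triggers the "cannot be resolved or is not a field"
        // style compile error in the generated job.
        // Corrected expression, with the multiplication operator added:
        double transactionCost = Transactions.Unit_Price * Transactions.qty;

        System.out.println(transactionCost); // 10.0
    }
}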
Bitcoin and Blockchain: are you reaping the benefits for your business?
Accelerate the Move to Cloud Analytics with Talend, Snowflake and Cognizant
In the last few years, we've seen the concept of the "Cloud Data Lake" gain more traction in the enterprise. When done right, a data lake can provide the agility for Digital Transformation around customer experience by enabling access to historical and real-time data for analytics.
However, while the data lake is now a widely accepted concept both on-premises and in the cloud, organizations still have trouble making them usable and filling them with clean, reliable data. In fact, Gartner has predicted that through 2018, 90% of deployed data lakes will be useless. This is largely due to the diverse and complex combinations of data sources and data models that are popping up more than ever before.
Migrating enterprise analytics from on-premises to the cloud requires significant effort before delivering value. Cognizant just accelerated your time to value with a new Data Lake Quickstart solution. In this blog, I want to show you how you can run analytics migration projects to the cloud significantly faster, delivering in weeks instead of months with lower risk, using this new Quickstart.
Cognizant Data Lake Quickstart with Talend on Snowflake
First, let's start by going into detail on what this Quickstart solution is comprised of. The Cognizant Data Lake Quickstart Solution includes:
- A data lake reference architecture based on:
- Snowflake, the data warehouse built for the cloud
- Talend Cloud platform
- Amazon S3 and Amazon RDS
- Data migration from on-premises data warehouses (Teradata/Exadata/Netezza) to Snowflake using metadata migration
- Pre-built jobs for data ingestion and processing (pushdown to Snowflake and EMR)
- Uses Talend to extract data files from on-premises (structured/semi-structured) and ingest into Amazon S3 using a metadata-based approach to store data quality rules and target layout
- Stores data on Amazon S3 as an enterprise data lake for processing
- Leverages the Talend Snowflake data loader to move files to Snowflake from Amazon S3
- Runs Talend jobs that connect to Snowflake and process the data on execution
- Cost optimization – Up to 50% reduction in initial setup effort to migrate to Snowflake
- Simplification – Template based approach to facilitate Infrastructure setup and Talend jobs
- Faster time to market – Deliver in weeks instead of months.
- Agility – Changes to the migration mainly consist of metadata changes, without any code change. A self-service mechanism onboards new sources, configurations, environments, etc. just by providing metadata, with minimal Talend technical expertise required. It's also easy to maintain, as all data migration configurations are kept in a single metadata repository.
What the Healthcare Industry Can Teach Companies About Their Data Strategy
The information revolution – which holds the promise of a supercharged economy through the use of advanced analytics, data management technologies, the cloud, and knowledge – is affecting every industry. Digital transformation requires major IT modernization and the ability to shorten the time from data to insights in order to make the right business decisions. For companies, it means being able to efficiently process and analyze data from a variety of sources at scale. All this in the hope of streamlining operations, enhancing customer relationships, and providing new and improved products and services.
The healthcare and pharmaceutical industries are the perfect embodiment of what is at stake with the data revolution. Opportunities lie at all the steps of the health care value chain for those who succeed in their digital transformation:
- Prevention: Predicting patients at risk for disease or readmission.
- Diagnosis: Accurately diagnosing patient conditions, matching treatments with outcomes.
- Treatment: Providing optimal and personalized health care through the meaningful use of health information.
- Recovery and reimbursement: Reducing healthcare costs, fraud and avoidable healthcare system overuse. Providing support for reformed value-based incentives for cost effective patient care, effective use of Electronic Health Records (EHR), and other patient information.
Being able to unlock the relevance of healthcare data is the key to having a 360-view of the patient and, ultimately, delivering better care.
Data challenges in the age of connected care
But that's easier said than done. The healthcare industry faces the same challenge as others: business insights are often missed due to the speed of change and the complexity of mounting data users and needs. Healthcare organizations have to deal with massive amounts of data housed in a variety of data silos, such as information from insurance companies and patient records from multiple physicians and hospitals. To access this data and quickly analyze healthcare information, it is critical to break down the data silos.
Healthcare organizations are increasingly moving their data warehouse to a cloud-based solution and creating a single, unified platform for modern data integration and management across cloud and on-premises environments. Cloud-based integration solutions provide broad and robust connectivity, data quality and governance tracking, simple pricing, data security, and big data analysis capabilities.
Decision Resources Group (DRG) finds success in the cloud
Decision Resources Group (DRG) is a good example of the transformative power of the cloud for healthcare companies. DRG provides healthcare analytics, data and insight products and services to the world's leading pharma, biotech and medical technology companies. To extend its competitive edge, DRG made the choice to build a cloud data warehouse to support the creation of its new Real-World Data Platform, a comprehensive claim and electronic health record repository that covers more than 90% of the US healthcare system. With this platform, DRG is tracking the patient journey, identifying influencers in healthcare decision making and segmenting data so that their customers have access to relevant, timely data for decision making.
DRG determined that their IT infrastructure could not scale to handle the petabytes of data that needed to be processed and analyzed. They looked for solutions that contained a platform with a SQL engine that works with big data and could run on Amazon Web Services (AWS) in the cloud. DRG selected data integration provider Talend and the Snowflake cloud data warehouse as the foundation of its new Real-World Data Platform. With integrations with Spark for advanced machine learning and Tableau for analysis, DRG gets scalable compute performance without complications, allowing their developers to build data integration workflows without much coding involved. DRG now has the necessary infrastructure to accommodate and sustain massive growth in data assets and user groups over time and is able to perform big data analytics at the speed of the cloud. This is the real competitive edge.
The right partner for IT modernization
When it came to its enterprise information overhaul, DRG is not the only healthcare company that made the choice to modernize in the cloud. AstraZeneca, the world's seventh-largest pharmaceutical company, chose to build a cloud data lake with Talend and AWS for its digital transformation. This architecture enables them to scale up and scale down based on business needs. Healthcare and pharmaceutical companies are at the forefront of a major transformation across all industries, requiring the use of advanced analytics and big data technologies such as AI and machine learning to process and analyze data and provide insights. This digital transformation requires IT modernization, using hybrid or multi-cloud environments and providing a way to easily combine and analyze data from various sources and formats. Talend is the right partner for these healthcare companies, but also for any other company going through digital transformation.
Additional Resources:
Read more about the DRG case study: https://www.talend.com/customers/drg-decision-resources-group/
Read more about the AstraZeneca case study: https://www.talend.com/customers/astrazeneca/
Talend Cloud: https://www.talend.com/products/integration-cloud/
The post What the Healthcare Industry Can Teach Companies About Their Data Strategy appeared first on Talend Real-Time Open Source Data Integration Software.
Meet Aleksandr, Ambassador of the month | November 2018
Three reasons to move your on-premises data architecture to the cloud
Most companies only use 5 to 10 percent of the data they collect. So estimates Beatriz Sanz Sai, a 20-year veteran in advanced analytics and the head of Ernst & Young’s global data and analytics practice. While it’s impossible to validate such a claim, the fact is many organizations are gathering lots of data but[...] Read the full article here. The post Three reasons to move your on-premises data architecture to the cloud appeared first on SnapLogic.
Liferay Security Announcement: TLS v1.0
The vulnerabilities in TLS 1.0 (and the SSL protocols) include POODLE and DROWN. Due to these security risks, Liferay decided to disable TLS 1.0, as many other companies have done. Moving to TLS 1.1 and higher will allow users to keep communications between Liferay and Liferay.com secure.
What TLS versions Liferay systems are going to support
We will support TLS 1.1 and above.
Affected Liferay Services and Websites
Liferay Portal CE and Liferay DXP functionality:
- Marketplace
- Licensing (via order id, EE only)
- api.liferay.com
- cdn.lfrs.sl
- community.liferay.com
- customer.liferay.com
- demo.liferay.com
- dev.liferay.com
- downloads.liferay.com
- forms.liferay.com
- learn.liferay.com
- liferay.com
- liferay.com.br
- liferay.com.cn
- liferay.de
- liferay.es
- liferay.org
- marketplace.liferay.com
- mp.liferay.com
- origin.lfrs.sl
- partner.liferay.com
- services.liferay.com
- support.liferay.com
- translate.liferay.com
- www.liferay.com
- releases.liferay.com (tentative)
- repository.liferay.com (tentative)
- On Java 8, the default client-side TLS version is TLS 1.2 (TLS 1.1 is also supported and enabled). Java 8 also introduced a new system property called jdk.tls.client.protocols to configure which protocols are enabled.
- On Java 7, the default client-side TLS version is TLS 1.0, but TLS 1.1 and 1.2 are also supported, though they must be enabled manually. As of Java 7u111, TLS 1.2 is also enabled by default, though this update is available for Oracle Subscribers only.
- The system property, jdk.tls.client.protocols, is available as of Java 7u95 (for Oracle Subscribers only).
- On Java 6, the default and only client-side TLS version is TLS 1.0. As of Java 6u111, TLS 1.1 is also supported, though this update is available for Oracle Subscribers only.
- There is another Java system property available called https.protocols, which controls the protocol version used by Java clients in certain cases (see details on Oracle's blog: Diagnosing TLS, SSL, and HTTPS). A short sketch of setting both properties follows the reference links below.
- Oracle Documentation: JDK 8 Security Enhancements
- Oracle Documentation: Java SE 7 Security Enhancements
- Oracle Blog: JDK 8 will use TLS 1.2 as default
- Oracle Blog: Diagnosing TLS, SSL, and HTTPS
- JDK Bug System: JDK-7093640 Enable client-side TLS 1.2 by default
- Oracle Documentation: Java SE Development Kit 7, Update 95 (JDK 7u95)
- IBM Support: How do I change the default SSL protocol my Java Client Application will use?
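As a quick illustration of the client-side properties mentioned above (my own sketch, not part of Liferay's announcement), the following Java program enables TLS 1.1/1.2 for a client JVM and prints the protocol versions the default SSLContext will offer. The same values are more commonly passed on the command line via -Dhttps.protocols and -Djdk.tls.client.protocols:
import java.util.Arrays;

import javax.net.ssl.SSLContext;

public class TlsClientCheck {

    public static void main(String[] args) throws Exception {
        // Must be set before the first TLS connection is made (or passed as -D
        // options when starting the JVM).
        System.setProperty("https.protocols", "TLSv1.1,TLSv1.2"); // HttpsURLConnection clients
        System.setProperty("jdk.tls.client.protocols", "TLSv1.1,TLSv1.2"); // default SSLContext (Java 7u95+/8+)

        // Sanity check: print the protocol versions the default context enables.
        SSLContext sslContext = SSLContext.getDefault();

        System.out.println(
            Arrays.toString(sslContext.getDefaultSSLParameters().getProtocols()));
    }
}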
Talend and Red Hat OpenShift Integration: A Primer
One of the aspects of Talend that has always fascinated me is its ability to run programs according to multiple job execution methodologies. Today I wanted to write an overview of a new way of executing data integration jobs using Talend and the Red Hat OpenShift Platform. First and foremost, let us do a quick recap of the standard ways of running Talend jobs. Users usually run Talend jobs using Talend schedulers, which can be either in the cloud or on-premises. Other methods include creating standalone jobs, building web services from Talend jobs, building OSGi bundles for ESB and, the latest entry to this list from Talend 7.1 onwards, building the job as a Docker image. For this blog, we are going to focus on the Docker route and show you how Talend Data Integration jobs can be used with the Red Hat OpenShift Platform. I would also highly recommend reading two other interesting Talend blogs related to the interaction between Talend and Docker, which are:
- Going Serverless with Talend through CI/CD and Containers by Thibaut Gourdel
- Overview: Talend Server Applications with Docker by Michaël Gainhao
- Red Hat OpenShift Online Pro
- Red Hat OpenShift Dedicated
- Red Hat OpenShift Container Platform
Event Management online training - Friday, November 30th
Learn the basics of customizing CiviEvent for your organization, the steps to create an online event, and how to manage and track event participants during this 2-hour online training session taught by Cividesk.
Three ways API management transforms your organization
In my previous blog post, “Future-proof your API lifecycle strategy,” I took a pretty nuts-and-bolts approach in explaining why companies are rethinking their application programming interface (API) lifecycle strategy for the future. Here I’ll take the discussion up a notch, to talk about three ways that a modern approach to API management can fundamentally change[...] Read the full article here. The post Three ways API management transforms your organization appeared first on SnapLogic.
Getting Started with Talend Open Studio: Building a Complex tMap Job
In our previous blog, we walked through a simple job moving data from a CSV file into a Snowflake data warehouse. In this blog, we will explore some of the more advanced features of the tMap component. Similar to the last blog, you will be working with customer data in a CSV file and writing out to a Snowflake data warehouse; however, you will also be joining your customer CSV file with transaction data. As a result, you will need Talend Open Studio for Data Integration, two CSV data sources that you would like to join (in this example we use customer and transaction data sets), and a Snowflake warehouse for this tutorial. If you would like to follow a video version of this tutorial, feel free to watch our on-demand webinar and skip to the fourth video.
First, we will join and transform customer data and transaction data. As you join the customer data with transaction data, any customer data that does not find matching transactions will be pushed out to a tLogRow component (which will present the data in a Studio log following run time). The data that is successfully matched will be used to calculate top grossing customer sales before being pushed out into a Sales Report table within our Snowflake database.
Construct Your Job
Now, before beginning to work on this new job, make sure you have all the necessary metadata configurations in your Studio's Repository. As demonstrated in the previous blog, you will need to import your customer metadata, and you will need to use the same process to import your transaction metadata. In addition, you will need to import your Snowflake data warehouse as mentioned in the previous blog if you haven't done so already.
So that you don't have to start building a new job from scratch, you can take the original job that you created in the last blog (containing your customer data, tMap and Snowflake table) and duplicate it by right-clicking on the job and selecting Duplicate from the dropdown menu. Rename this new job – in this example we will be calling the new job "Generate_SalesReport". Now in the Repository you can open the duplicated job and begin adjusting the job as needed. More specifically, you will need to delete the old Snowflake output component and the Customers table configuration within tMap. Once that is done, you can start building out the new flow.
Start building out your new job by first dragging and dropping your Transactions metadata definition from the Repository onto the Design Window as a tFileInputDelimited component, connecting this new component to the tMap as a lookup. An important rule of thumb to keep in mind when working with the tMap component is that the first source connected to a tMap is the "Main" dataset. Any dataset linked to the tMap after the "Main" dataset is considered a "Lookup" dataset. At this point it is a good idea to rename the source connections to the tMap. Naming connections will come in handy when it's time to configure the tMap components. To rename connections, perform a slow double-click on the connection arrow. The name will become editable. Name the "Main" connection (the Customer dataset) "Customers" and the "Lookup" connection (the Transactions dataset) "Transactions". Later, we will come back to this tMap and configure it to perform a full inner join of customer and transaction data. For now, we will continue to construct the rest of the job flow.
To continue building out the rest of the job flow, connect a tLogRow component as an output from the tMap (in the same way as discussed above, rename this connection "Cust_NoTransactions"). This tLogRow will capture customer records that have no matching transactions, allowing you to review non-matched customer data within the Studio log after you run your job. In a productionalized job flow, this data would be more valuable within a database table, making it available for further analysis, but for simplicity of this discussion we will just write it out to a log.
The primary output of our tMap consists of customer data that successfully joins to transaction data. Once joined, this data will be collected using a tAggregateRow component to calculate the total quantity and sales of items purchased. To add the tAggregateRow component to the design window, either search for it within the Component Pallet and then drag and drop it into the Design Window, or click directly in the design window and begin typing "tAggregateRow" to automatically locate and place it into your job flow. Now, connect your tAggregateRow to the tMap and name the connection "Cust_Transactions". Next, you will want to sort your joined, aggregated data, so add the tSortRow component. In order to map the data to its final destination – your Snowflake target table – you will need one more tMap. To distinguish between the two tMap components and their intended purposes, make sure to rename this tMap to something like "Map to Snowflake". Finally, drag and drop your Snowflake Sales Report table from within the Repository to your Design window and ensure the Snowflake output is connected to your job. Name that connection "Snowflake" and click "Yes" to get the schema of the target component. As a best practice, give your job a quick look over and ensure you've renamed any connections or components with clear and descriptive labels. With your job constructed, you can now configure your components.
Configuring Your Components
First, double-click to open the Join Data tMap component configuration. On the left, you can see two source tables, each identified by their connection name. To the right, there are two output tables: one for the customers not matched to any transactions and one for the joined data. Start by joining your customers and transactions data. Click and hold ID from within the Customers table and drag and drop it onto ID from within the Transactions table. The default join type in a tMap component is a Left Outer Join, but you will want to join only those customer IDs that have matching transactions, so switch the Join Model to an "Inner Join". Within this joined table, we want to include the customer ID in one column and the customers' full names in a separate column. Since our data has first name and last name as two separate columns, we will need to join them, creating what is called a new "expression". To do this, drag and drop both the "first_name" and "last_name" columns onto the same line within the table. We will complete the expression in a bit. Similarly, we want the Quantity column from the transaction data on its own line, but we also want to use it to complete a mathematical expression. By dragging and dropping Unit Price and Quantity onto the same line within the new table, we can do just that. You can now take advantage of the "Expression Builder", which gives you even more control of your data.
It offers a list of defined pre-coded functions that you can apply directly to this expression – I highly recommend that you look through the Expression Builder to see what it can offer. And even better, if you know the Java code for your action, you can enter it manually. In this first case, we want to concatenate the first and last names. After adding the correct syntax within the expression builder, click Ok. You will want to use the Expression Builder again for your grouped transaction expression. With the Unit Price and Quantity expression, complete an arithmetic action to get the total transaction value by multiplying the Unit Price by the Quantity. Then, click Ok.
Remember, we set our Join Model to an Inner Join. However, Talend offers a nice way to capture just the customers who didn't have transactions. To capture these "rejects" from an Inner Join, first drag and drop ALL the fields from the customers table to the Cust_NoTransactions output table. Then, select the tool icon at the top right of this table definition and switch "Catch lookup inner join reject" to "true". With the fields properly mapped, it is time to move on and review the data below. Rename the first_name field to simply "name" (since it now includes the last name) and rename the Unit Price column to "transaction cost" (since it now has the mathematical expression applied). Then, ensure no further adjustments are necessary to the table's column types to avoid any mismatched type conflicts through the flow. With this tMap properly configured, click Ok, and then click "Yes" to propagate the changes.
Next, you will need to configure the Aggregate component. To do this, enter the Component Tab (below the Design Workspace) and edit the schema. To properly configure the output schema of the tAggregateRow component, first choose the columns on the left that will be grouped. In this case we want to group by ID and Name, so select "id" and "name" and then click the yellow arrow button pointing to the right. Next, we want to create two new output columns to store our aggregated values. By clicking the green "+" button below the "Aggregate Sales (Output)" section you can add the desired number of output columns. First, create a new output column for the total quantity ("total_qty") and identify it as an Integer type. Then create another for the total sales ("total_sales") and set it as a Double type. Next, click Ok, making sure to choose to propagate the changes.
With the output schema configured properly within the tAggregateRow component, we can now configure the Group By and Operations sections of the tAggregateRow component. To add your two Group By output columns and two Operations output columns, go back to the Component Tab. Click the green plus sign below the Group By section twice and the Operations section twice to account for the output columns configured in the tAggregateRow schema. Then, in the Operations section, set the "total_qty" column's function as "sum" and identify the input column position as "qty". This configures the tAggregateRow component to add all the quantities from the grouped customer IDs and output the total value in the "total_qty" column. Likewise, set the "total_sales" function as "sum" and its input column position as "transaction_cost". Next, head to the sorting component and configure it to sort by total sales to help us identify who our highest paying customers are.
To do this, click on the green "+" sign in the Component Tab, select "total_sales" in the Schema Column, and select "num" to ensure that your data is sorted numerically. Last, choose "desc" so your data will be shown to you in descending order. Now, configure your final tMap component by matching the customer name, total quantity and total sales. Then click Ok and click Yes to propagate the changes. Finally, make sure your tLogRow component is set to present your data in table format, making it easier for you to read the inner join reject data.
Running Your Job
At last, you are ready to run your job! Thanks to the tLogRow component, within the log you can see the six customers that were NOT matched with transaction data. If you head to Snowflake, you can view your "sales_report" worksheet and review the top customers in order of highest quantity and sales. And that's how to create a job that joins different sources, captures rejects, and presents the data the way you want it. In our next blog, we will be going through running and debugging your jobs. As always, please comment and let us know if there are any other basic skills you would like us to cover in a tutorial. The post Getting Started with Talend Open Studio: Building a Complex tMap Job appeared first on Talend Real-Time Open Source Data Integration Software.
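As a side note, here is a small plain-Java sketch of what the tAggregateRow step in this job computes: group the joined rows by customer id and name, then sum qty into total_qty and transaction_cost into total_sales. The sample rows are made up, and Talend generates its own code from the component settings; this is only meant to illustrate the aggregation logic:
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: the grouping/summing that the tAggregateRow component performs.
public class AggregateSketch {

    public static void main(String[] args) {
        // Joined rows: {id, name, qty, transaction_cost} (sample data)
        Object[][] rows = {
            {1, "Ada Lovelace", 2, 25.00},
            {1, "Ada Lovelace", 1, 12.50},
            {2, "Alan Turing", 4, 40.00},
        };

        // Group by (id, name); index 0 accumulates total_qty, index 1 total_sales.
        Map<String, double[]> totals = new LinkedHashMap<>();

        for (Object[] row : rows) {
            String key = row[0] + "|" + row[1];

            double[] agg = totals.computeIfAbsent(key, k -> new double[2]);

            agg[0] += (Integer)row[2]; // sum(qty) -> total_qty
            agg[1] += (Double)row[3];  // sum(transaction_cost) -> total_sales
        }

        totals.forEach((key, agg) -> System.out.println(
            key + " total_qty=" + (int)agg[0] + " total_sales=" + agg[1]));
        // Prints:
        // 1|Ada Lovelace total_qty=3 total_sales=37.5
        // 2|Alan Turing total_qty=4 total_sales=40.0
    }
}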
Joomla 3.9.1 Release
Joomla 3.9.1 is now available. This is a bug fix release for the 3.x series of Joomla including over 40 bug fixes and improvements.
Overriding the Favicon in DXP
So perusing the web, you'll notice there are a couple of resources out there explaining how to override the default favicon in Liferay. Of course, the standard way is to add it to the /images folder of your theme and apply the theme to the site. As long as you get your path right and use the default name, "favicon.ico," it should work just fine.
While this satisfies most use cases, what if you have stricter business requirements? For example, "The company logo must be displayed at all times while on the site," which includes while navigating the control panel, which has its own theme. You might be inclined then to look into the Tomcat folder and simply replace the file directly for all the themes. While this would work, it is not very maintainable, since you need to do it every time you deploy a new bundle (which can be very frequent depending on your build process) and for every instance of Liferay you have running.
With DXP, we've introduced another way to override certain JSPs through dynamic includes. Although not every JSP can be overridden this way, luckily for us, top_head.jsp (where the favicon is set) can be. There are a few things you need to do when creating your dynamic include for this.
First, you're going to want to register which extension point you'll be using for the JSP:
/html/common/themes/top_head.jsp#post
You're going to use "post" and not "pre" because the latest favicon specified is what is going to be rendered on the page. In other words, by adding our favicon after the default one, ours will take precedence.
Next, you're going to need a line that replaces this one:
<link href="<%= themeDisplay.getPathThemeImages() %>/<%= PropsValues.THEME_SHORTCUT_ICON %>" rel="icon" />
For that, we need to specify the image we're going to be using, which can be placed in our module's resource folder: src/main/resources/META-INF/resources/images.
Next, we need a way of accessing our module's resources, which can be done with its web context path. In your bnd.bnd, add the following line with your desired path name. For mine, I'll just use /favicon-override:
Web-ContextPath: /favicon-override
With that in place, we should be able to build our URL now. The finished URL will look something like: http://localhost:8080/o/favicon-override/images/favicon.ico. The components include the portal URL, /o (which is used to access an OSGi module's resources), the web context path, and the path to the resource.
Putting it all together, your class should look something like this:
// Import paths assume a recent Liferay DXP release; exact packages can vary between versions.
import com.liferay.portal.kernel.servlet.taglib.DynamicInclude;
import com.liferay.portal.kernel.theme.ThemeDisplay;
import com.liferay.portal.kernel.util.StringBundler;
import com.liferay.portal.kernel.util.WebKeys;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.osgi.service.component.annotations.Component;

// Registered as an OSGi component so the portal picks up the dynamic include.
@Component(immediate = true, service = DynamicInclude.class)
public class TopHeadDynamicInclude implements DynamicInclude {

    @Override
    public void include(
            HttpServletRequest request, HttpServletResponse response, String key)
        throws IOException {

        PrintWriter printWriter = response.getWriter();
        ThemeDisplay themeDisplay = (ThemeDisplay)request.getAttribute(
            WebKeys.THEME_DISPLAY);

        // Builds http(s)://<portal>/o/favicon-override/images/favicon.ico
        StringBundler url = new StringBundler(4);
        url.append(themeDisplay.getPortalURL());
        url.append("/o");
        url.append(_WEB_CONTEXT_PATH);
        url.append(_FAVICON_PATH);

        // Printed after the default favicon link, so this one takes precedence.
        printWriter.println("<link href=\"" + url.toString() + "\" rel=\"icon\" />");
    }

    @Override
    public void register(DynamicIncludeRegistry dynamicIncludeRegistry) {
        dynamicIncludeRegistry.register("/html/common/themes/top_head.jsp#post");
    }

    private static final String _WEB_CONTEXT_PATH = "/favicon-override";
    private static final String _FAVICON_PATH = "/images/favicon.ico";
}
That's all there is to it! You could also add additional println statements to cover cases for mobile devices. The beauty of this solution is that you have a deployable module that can handle the configuration, as opposed to manually doing it yourself, which is prone to error. Also, by using dynamic includes, it's more maintainable when you need to upgrade to the next version of Liferay.
Github Repository: https://github.com/JoshuaStClair/favicon-override
Joshua St. Clair
2018-11-26T18:33:00Z
