AdoptOS

Assistance with Open Source adoption

Open Source News

Fair Voting in Canada: In it for the long haul with CiviCRM

CiviCRM - Wed, 10/31/2018 - 10:56

This month, British Columbians (in Canada) will be voting on whether to adopt proportional representation (PR) when electing their provincial representatives. If they do, they'll be the first province in Canada to do so.

Fair Vote Canada has been advocating for proportional representation in Canada's voting systems for more than 10 years, and I've been working with them for almost as long, using CiviCRM. 

Categories: CRM

Anonymize your custom entities and comply with GDPR the easy way!

Liferay - Wed, 10/31/2018 - 09:54

Helping my colleague Sergio Sanchez with his GDPR talk at the recent Spanish Symposium, I came across a hidden gem in Liferay 7.1. It turns out you can integrate custom Service Builder entities with Liferay’s GDPR framework (UAD) to enable anonymization of personal data in your own entities.

I didn’t know anything about this feature, but it’s really easy to use and works like a charm (after solving a couple of “gotchas”!). Some of these gotchas have been identified in LPS tickets and will be mentioned in the article.

Let’s take a look at how it’s done!

Service Builder to the rescue!

The first step is to create a Service Builder project using the tool you like most (Blade, Liferay IDE...). As usual, this will create two projects, an API project and a service project, and here you'll hit the first hiccup. Blade doesn’t generate the service.xml file with the 7.1 DTD; it still uses 7.0. So the first thing we need to do is update the DTD to 7.1:

<!DOCTYPE service-builder PUBLIC "-//Liferay//DTD Service Builder 7.1.0//EN" "http://www.liferay.com/dtd/liferay-service-builder_7_1_0.dtd">

This issue is being tracked in LPS-86544.

The second “gotcha” is that once you update the DTD to version 7.1, Service Builder will generate code that won't compile or run on Liferay CE 7.1 GA1. To make it compile, you need to add a dependency on Petra and update the kernel to at least version 3.23.0 (the kernel version for Liferay CE 7.1 GA2, which hasn't been released yet). Unfortunately, even then it won't run if you deploy it to a Liferay CE 7.1 GA1 instance.

Thanks to Minhchau Dang for pointing this out; he filed this bug as ticket LPS-86835.

I ended up using these dependencies in the build.gradle of the -service project (I'm using Liferay DXP FP2):

dependencies {
	compileOnly group: "biz.aQute.bnd", name: "biz.aQute.bndlib", version: "3.5.0"
	compileOnly group: "com.liferay", name: "com.liferay.portal.spring.extender", version: "2.0.0"
	compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "3.26.0"
	compileOnly group: "com.liferay", name: "com.liferay.petra.string", version: "2.0.0"
}

Also note that throughout the example on GitHub I've used kernel version 3.26.0, not only in the -service project but everywhere.

What exactly do you want to anonymize?

Now your service and api projects should be ready to compile, so the next step is to include the necessary information in your service.xml to make your entities anonymizable.

The first things you need to add are two attributes at the entity level: uad-application-name and uad-package-path. uad-application-name is the name Liferay will use to show your application in the anonymization UI, and uad-package-path is the package Service Builder will use for the generated UAD classes. (Pro tip: don’t include “uad” in the package name as I did; SB will append it for you.)

In my example I used this entity:

<entity local-service="true" name="Promotion" remote-service="true" uuid="true"
    uad-application-name="Citytour"
    uad-package-path="com.liferay.symposium.citytour.uad">

Once you have specified those, you can start telling Service Builder how your entity’s data should be anonymized. For this, you can use two attributes at the field level: uad-anonymize-field-name and uad-nonanonymizable.

uad-anonymize-field-name, from the 7.1 service.xml DTD:

“The uad-anonymize-field-name value specifies the anonymous user field that should be used to replace this column's value during auto anonymization. For example, if "fullName" is specified, the anonymous user's full name will replace this column's value during auto anonymization. The uad-anonymize-field-name value should only be used with user name columns (e.g. "statusByUserName").”

For example, if we have a field defined like this:

<column name="userName" type="String" uad-anonymize-field-name="fullName"/>

That means that when the auto-anonymization process runs, it will replace the field's value with the anonymous user's full name.

On the other hand, uad-nonanonymizable, again from the 7.1 DTD:

“The uad-nonanonymizable value specifies whether the column represents data associated with a specific user that should be reviewed by an administrator in the event of a GDPR compliance request. This implies the data cannot be anonymized automatically.”

This means exactly that: the field can’t be auto-anonymized and needs manual review when deleting user data. That’s only partially true, because even though the anonymization isn't automatic, the admin doesn’t have to actually do anything beyond reviewing the entities and clicking “anonymize” (provided the anonymization process for the entity is implemented, which we’ll do later on).

I used these fields in my “Promotion” entity:

<!-- PK fields -->
<column name="promotionId" primary="true" type="long" />

<!-- Group instance -->
<column name="groupId" type="long" />

<!-- Audit fields -->
<column name="companyId" type="long" />
<column name="userId" type="long" />
<column name="userName" type="String" uad-anonymize-field-name="fullName" />
<column name="createDate" type="Date" />
<column name="modifiedDate" type="Date" />

<!-- Promotion Data -->
<column name="description" type="String" uad-nonanonymizable="true" />
<column name="price" type="double" uad-nonanonymizable="true" />
<column name="destinationCity" type="String" uad-nonanonymizable="true" />

<!-- Personal User Data -->
<column name="offereeFirstName" type="String" />
<column name="offereeLastName" type="String" />
<column name="offereeIdNumber" type="String" />
<column name="offereeTelephone" type="String" />

In my case, I’m telling the framework that the description, price, and destinationCity fields need
manual review.

So, what does SB do with these two attributes? If an entity has a field marked with uad-anonymize-field-name, running buildService will create two new projects to hold the anonymization, display, and data export logic! Isn't Service Builder awesome?

Build Service Builder Services using buildService

Excellent, you’re almost there! Now you’re ready to run the buildService gradle task, and you should see that SB has created two projects for you: <entity>-uad and <entity>-uad-test. The <entity>-uad project (promotions-uad in my case) contains the custom anonymization and data export logic for your entity, and -uad-test contains, well, the test classes.

And now that we have both projects, we must fix another “gotcha”. If you run the build or deploy gradle tasks now on those projects, they will fail spectacularly. Why? Well, if you take a look at your projects, you’ll see that there’s no build.gradle file! That’s Service Builder being a little petty, but don’t worry, you can create a new one and include these dependencies (again, based on my system):

dependencies {
	compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "3.26.0"
	compileOnly group: "com.liferay", name: "com.liferay.user.associated.data.api", version: "3.0.1"
	compileOnly group: "com.liferay", name: "com.liferay.petra.string", version: "2.0.0"
	compileOnly group: "org.osgi", name: "osgi.cmpn", version: "6.0.0"
	compileOnly project(":modules:promotions:promotions-api")
}

This bug is being tracked in LPS-86814.

Now your project should compile without issues. Let’s add our custom anonymization logic!

Customize to anonymize

In your -uad project you’ll find three packages (actually four if you count constants):

  • Anonymizer: Holds the custom anonymization logic for your entity.
  • Display: Holds the custom display logic for Liferay’s UAD UI.
  • Exporter: Holds the custom personal data export logic.

In each one, you’ll find a Base class and a concrete class that extends it. For the anonymizer package in my example, I have a BasePromotionUADAnonymizer class and a PromotionUADAnonymizer class. What we’ll do is take the concrete class (PromotionUADAnonymizer in my case) and override its autoAnonymize method. In this method, you tell the framework how to anonymize each field of your custom entity. In my example I did this:

@Component(immediate = true, service = UADAnonymizer.class)
public class PromotionUADAnonymizer extends BasePromotionUADAnonymizer {

	@Override
	public void autoAnonymize(Promotion promotion, long userId, User anonymousUser)
		throws PortalException {

		if (promotion.getUserId() == userId) {
			promotion.setUserId(anonymousUser.getUserId());
			promotion.setUserName(anonymousUser.getFullName());
			promotion.setOffereeFirstName(anonymousUser.getFirstName());
			promotion.setOffereeLastName(anonymousUser.getLastName());
			promotion.setOffereeIdNumber("XXXXXXXXX");
			promotion.setOffereeTelephone("XXX-XXX-XXX");
		}

		promotionLocalService.updatePromotion(promotion);
	}
}

I’m using the anonymousUser's fields for my entity’s first and last name, and setting the ID and telephone fields to anonymized placeholder values, but you can choose which fields to anonymize and how.

Done!

Good job, you’re done! Now your entities are integrated with Liferay’s UAD framework, and when deleting a user, you’ll see that it works like a charm. To see how UAD works from a user perspective, you can check out the Managing User Data chapter in the documentation.

I’ve uploaded this example to my GitHub repo, along with a web portlet to create Promotions. Just add the promotions-web portlet to a page, create some promotions with a user other than Test Test, and then delete that user; you should see that all entries created by that user now have anonymized fields.

Hope you like it: https://github.com/ibairuiz/sympo-demo

Ibai Ruiz 2018-10-31T14:54:00Z
Categories: CMS, ECM

Machine learning has a data integration problem: The need for self-service

SnapLogic - Tue, 10/30/2018 - 12:54

When we built the Iris Integration Assistant, an AI-powered recommendation engine, it was SnapLogic’s first foray into machine learning (ML). While the experience left us with many useful insights, one stood out above the rest: machine learning, we discovered, is full of data integration challenges. Of course, going into the process, we understood that developing[...] Read the full article here.

The post Machine learning has a data integration problem: The need for self-service appeared first on SnapLogic.

Categories: ETL

Continuous Integration Best Practices – Part 2

Talend - Tue, 10/30/2018 - 11:36

This is the second part of my blog series on CI/CD best practices. If you are new to this series, please refer to Part 1, where we covered the first ten best practices. Also, a big thank you for all the support and feedback! In this blog, I want to touch on some more best practices. So, with that, let’s jump right in!

Best Practice 11 – Run Your Fastest Tests First

The CI/CD system serves as a channel for all changes entering your system, so discovering failures as early as possible is important to minimize the resources devoted to problematic builds. It is recommended that you prioritize and run your fastest tests first and save the complex, long-running tests for later. Once you have validated the build with smaller, quick-running tests and ensured that it looks good initially, you can run your complex, long-running test cases.

Following this best practice keeps your CI/CD process healthy by helping you understand the performance impact of individual tests and complete most of your tests early. It also increases the likelihood of fast failures, which means problematic changes can be reverted or fixed before they block other members' work.
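
The fail-fast ordering described above can be sketched in a few lines of Python (the stage names and stand-in test functions are illustrative, not taken from any particular CI tool):

```python
# Hypothetical pipeline stages as (name, test-function) pairs, ordered
# fastest first so a bad change is rejected before expensive suites run.
def lint():
    return True            # stand-in for a fast static check

def unit_tests():
    return True            # stand-in for a quick unit suite

def integration_tests():
    return True            # stand-in for a slow end-to-end suite

stages = [
    ("lint", lint),
    ("unit tests", unit_tests),
    ("integration tests", integration_tests),
]

def run_pipeline(stages):
    """Run stages in order; stop at the first failure and report it."""
    for name, test in stages:
        if not test():
            return name    # fail fast: later stages never start
    return None

failed = run_pipeline(stages)
print("all stages passed" if failed is None else f"failed at: {failed}")
```

The point of the ordering is visible in run_pipeline: a failing fast stage returns immediately, so the expensive suites are never invoked for a build that is already known to be broken.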

Best Practice 12 – Minimize Branching

One of the version control best practices is integrating changes into the parent branch/shared repository early and often. This helps avoid costly integration problems down the line when multiple developers attempt to merge large, divergent, and conflicting changes into the main branch of the repository in preparation for release.

To take advantage of the benefits that CI provides, it is best to limit the number and scope of branches in your repository. Developers should merge changes from their local branches to the remote branch at least once a day. Branches that are not tracked by your CI/CD system contain untested code and should be discarded, as they are a threat to the stable code.

Best Practice 13 – Run Tests Locally

Another best practice, related to my earlier point about discovering failures early, is that teams should be encouraged to run as many tests as possible locally before committing to the shared repository. This ensures that the piece of code you are working on is sound, and it also makes it easier to recognize unforeseen problems after integrating the code into the master branch.

Best Practice 14 – Write extensive test cases

To be successful with CI/CD, you need a test suite that exercises every change that gets integrated. As a best practice, it is recommended to develop a culture of writing tests for each code change that is integrated into the master branch. The test cases should include unit, integration, regression, and smoke tests, or any kind of test that covers the project end to end.
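
As a minimal illustration of the unit-test layer, here is a sketch in Python; the function under test and its test are hypothetical, invented purely for the example:

```python
# Hypothetical function under test: normalize a city name before loading it.
def normalize_city(name: str) -> str:
    """Trim surrounding whitespace and title-case a city name."""
    return name.strip().title()

# A unit test for it: small and fast, so it can run on every integrated change.
def test_normalize_city():
    assert normalize_city("  new york ") == "New York"
    assert normalize_city("PARIS") == "Paris"

test_normalize_city()
print("unit tests passed")
```

Integration, regression, and smoke tests follow the same pattern at larger scope; the unit layer stays fast enough to run on every commit.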

Best Practice 15 – Rollback

This is probably the most important part of any implementation. As the next best practice, always have an easy way to roll back changes if something goes wrong. I have typically seen organizations roll back by redeploying the previous release or by rebuilding from the stable code.
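
The "redeploy the previous release" approach can be sketched like this (the release history and version numbers are illustrative):

```python
# Illustrative release history; the last entry is the deploy that failed.
releases = ["1.0.0", "1.1.0", "1.2.0"]

def rollback(history):
    """Drop the failed release and return the version to redeploy."""
    if len(history) < 2:
        raise RuntimeError("no previous release to roll back to")
    history.pop()        # discard the failed release
    return history[-1]   # previous stable version to redeploy

print("redeploying", rollback(releases))
```

The key property is that rollback never requires new work at failure time: the previous artifact already exists and is known good.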

Best Practice 16 – Deploy the Same Way Every Time

I have seen organizations with multiple CI/CD pipelines, varying from multiple tools to multiple mechanisms. As a best practice, deploy the code the same way every time to avoid unnecessary configuration and maintenance issues across environments. If you use the same deploy method every time, the same set of steps is triggered in every environment, making the pipeline easier to understand and maintain.

Best Practice 17 – Automate the build and deployment

Although manual build and deployment can work, I recommend automating the pipeline. Automation eliminates manual errors and ensures that the development team always works from the latest source code in the repository, compiling it every time to build the final product. With the many tools available, such as Jenkins, Bamboo, and Bitbucket, it is very easy to automate creating the workspace, compiling the code, converting it to binaries, and publishing it to Nexus.

Best Practice 18 – Measure Your Pipeline Success

It is a good practice to track the CI/CD pipeline success rate. You could measure it both before and after you start automation and compare the results. Although the metrics for evaluating CI/CD success depend on the organization, some points to consider are:

  • Number of jobs deployed monthly/weekly/daily to Dev/TEST/Pre-PROD/PROD
  • Number of Successful/Failure Builds
  • Time taken for each build
  • Rollback time
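
For instance, a success-rate metric over recorded build results could be computed like this (the build log is made up for illustration):

```python
# Illustrative build log for one period: True = successful build.
builds = [True, True, False, True, True, True, False, True, True, True]

success_rate = 100 * sum(builds) / len(builds)
failures = len(builds) - sum(builds)

print(f"{len(builds)} builds, {failures} failures, {success_rate:.0f}% success rate")
```

Tracking the same numbers before and after automation gives you a concrete before/after comparison rather than a gut feeling.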

Best Practice 19 – Backup

Your CI/CD pipeline involves many processes and steps, so as the next best practice I recommend taking periodic backups of your pipeline. If you are using Jenkins, this can be accomplished via the Backup Manager.

Best Practice 20 – Clean up CI/CD environment

Lots of builds can clutter your CI/CD system over time and impact its overall performance. As the next best practice, I recommend cleaning the CI/CD server periodically. This cleanup can include the CI pipeline, temporary workspaces, Nexus repositories, etc.

Conclusion

And with this, I come to the end of this two-part blog series. I hope these best practices are helpful and that you embed them in your CI/CD work.

The post Continuous Integration Best Practices – Part 2 appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Joomla 3.9 is live!

Joomla! - Tue, 10/30/2018 - 08:45

It’s a good day for the Joomla Project, as today we proudly announce the release of Joomla 3.9 – ‘The Privacy Tool Suite’ - marking the tenth minor release in the 3.x series.

Categories: CMS

Progress in San Francisco's Open Source Elections Project

Open Source Initiative - Mon, 10/29/2018 - 18:50

As many within the OSI community may know, the OSI has been following closely—and supporting—San Francisco's efforts to develop an open source elections system. Below is an update from Commissioner Chris Jerdonek of the San Francisco Elections Commission.

1. San Francisco Hiring a Project Manager!

San Francisco is now hiring a Senior Technical Project Manager to lead the open source voting project! This is a major step forward for the project. The salary is $163K / year. The position was first posted on Tuesday, August 28 and will stay open until filled (but you should try to apply ASAP if possible). Here is the job posting: https://jobapscloud.com/SF/sup/BulPreview.asp?R1=TEX&R2=5504&R3=088534

Help spread the word about this key position so the City can get good applicants! Here's a tweet you can retweet: https://twitter.com/SFOpenVoting/status/1038176690189492224

2. Applications Now Open for Open Source Voting Technical Advisory Committee (TAC)

The SF Elections Commission is now accepting applications to fill a vacancy on its 5-member Open Source Voting Technical Advisory Committee, which meets monthly. Here are the application instructions (applications are due Monday, September 24): https://sfgov.org/electionscommission/osvtac-app

Also help spread the word about this opportunity! Here's a tweet you can retweet: https://twitter.com/SFElectionsComm/status/1038168125961916417

Below are some other key developments since May:

3. More Money Budgeted for the Open Source Voting Project

Thanks to the leadership of Supervisor (and Budget Committee Chair) Malia Cohen and Mayor London Breed, San Francisco allocated an additional $1.255 million towards the open source voting project late in the City's annual budget process. Mayor Breed signed the City's new budget on August 1.

This extra money will let SF start actual development of the voting system sooner, instead of having to wait another year. This brings the total amount of money allocated for the project to $1.68 million. The breakdown is as follows:

  • $125K left over from the $300K allocated in 2016
  • $300K allocated by COIT this spring
  • $1.255M allocated late in the budget process ($660K for 2018-19 and $595K for 2019-20)
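
The breakdown above checks out against the stated total:

```python
# Allocations from the update, in dollars.
left_over_2016 = 125_000
coit_spring = 300_000
late_budget = 660_000 + 595_000   # 2018-19 plus 2019-20

total = left_over_2016 + coit_spring + late_budget
print(f"${total:,}")  # → $1,680,000
```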

Thanks also to the California Clean Money Campaign for being instrumental in organizing public support for this additional funding.

4. Civil Grand Jury Report

On June 29, 2018, the SF Civil Grand Jury issued a 48-page report on the status of the open source voting project after studying it for a year (focusing in particular on why it's not happening faster). The report can be found here, along with the press release they issued: http://civilgrandjury.sfgov.org/report.html

5. Elections Commission Resolution #2

On June 20, 2018, the Elections Commission unanimously passed a second resolution on Open Source Voting. This resolution can be viewed as an update or "sequel" to the resolution the Commission passed in November 2015, taking into account the events and developments of the past few years. This new resolution (as well as the first one) can be found here: https://sfgov.org/electionscommission/motions-and-resolutions

Thanks for reading and for helping to spread the word!

--Chris

Categories: Open Source

LiferayPhotos: a fully functional app in 10 days using Liferay Screens. Final part: Development experience.

Liferay - Mon, 10/29/2018 - 12:06

In the previous posts, we talked about what we were going to do, how we were going to do it, and which tools we were going to use. We introduced you to Liferay Screens' screenlets and explained why we were going to use them.

Well, it’s time to explain how we developed this application, and how a junior developer with only one year of experience in the mobile world and two weeks at Liferay built this app in 10 days thanks to Liferay Screens and its screenlets. In this post, I will explain what I did and what my experience was like.

Introduction

I am going to detail as much as possible how we built the UserProfile screen, where users can see their photos next to their basic information, and the Feed screen, which shows a timeline of users' photos. Finally, I will show how we implemented the button that lets users upload new images, either by taking a photo with the camera or by picking one from the gallery. I also want to explain the problems I encountered during development.

First of all, as a starting point, it’s necessary to be familiar with creating iOS themes; all of this is explained in this tutorial. This is very useful because, as you will see, to build the app we only have to change the screenlets’ UI, since the logic is already done.

UserProfile screen

For the UserProfile screen I created a collection of new themes for the screenlets used on this screen. Basically, what I did was extend each default theme, which has the name of the screenlet ending in View_default, e.g., UserPortraitView_default.

The new theme should be given the same name as the default theme, but changing the suffix to the name we want to give it. For example, in UserProfile we modify the UserPortraitView to show the image inside a circle. For this, as I said before, we just copy the default theme and change the name, in my case to UserPortraitView_liferayphotos; thus, the theme name will be liferayphotos. You have to put this name into the screenlet’s themeName attribute.

The code associated with the view, created in an analogous way, is where I defined how the image is displayed; in our case, as I said above, it appears inside a circle shape.

import Foundation
import LiferayScreens

class UserPortraitView_liferayphotos: UserPortraitView_default {

	override func onShow() {
		super.onShow()

		self.portraitImage?.layer.cornerRadius = self.portraitImage!.frame.size.width / 2
		self.portraitImage?.clipsToBounds = true
		self.borderWidth = 0.0
		self.editButton?.layer.cornerRadius = self.editButton!.frame.size.width / 2
		self.screenlet?.backgroundColor = .clear
	}
}

Just below the UserPortraitScreenlet I added the username and the user's email. Both are simple text labels.

Behind the user's basic information, I placed an image using an ImageDisplayScreenlet. This lets you render an image from Documents and Media; pretty straightforward.

Now, I am going to explain the image gallery. This is maybe the trickiest part of this screen but, with screenlets, nothing is difficult :)

To do this I used an ImageGalleryScreenlet but, as I did with the UserPortraitScreenlet, I created a custom theme to give it a more appropriate style. This style consists of 3 columns with a small space between columns and rows.

First of all, I had to specify the number of columns, because the default number didn’t fit our design. It’s just one property, called columnNumber:

override var columnNumber: Int {
	get { return 3 }
	set {}
}

The next step is to specify the layout to use in the gallery. I needed to calculate the item size and the spacing between items. All of this is done in the doCreateLayout() function.

override func doCreateLayout() -> UICollectionViewLayout {
	let layout = UICollectionViewFlowLayout()
	let cellWidth = UIScreen.main.bounds.width / CGFloat(columnNumber) - spacing

	layout.sectionInset = UIEdgeInsets(top: 0, left: 0, bottom: 0, right: 0)
	layout.itemSize = CGSize(width: cellWidth, height: cellWidth)
	layout.minimumInteritemSpacing = spacing
	layout.minimumLineSpacing = spacing

	return layout
}
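
As a quick numeric check of the cell-width formula (the screen width and spacing values here are illustrative, not from the app):

```python
# Mirrors the Swift computation: cellWidth = screenWidth / columns - spacing.
screen_width = 375.0    # illustrative iPhone point width
column_number = 3
spacing = 2.0           # illustrative inter-item spacing

cell_width = screen_width / column_number - spacing
print(cell_width)  # → 123.0
```

Because each item is a square of cell_width points, three columns plus the spacing fill the screen width exactly.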

And that’s all for this screen. With three screenlets I have a fully functional user profile.

Feed screen

Honestly, I thought the Feed screen would be the most complicated one. In fact, the whole screen is just one big screenlet: the ImageGalleryScreenlet. But how? Well, I have to show all the images, so it’s an image gallery, and the rest of the information is added to each item's cell.

As we mentioned in the previous post, in the cell I have two screenlets: asset display screenlet and rating screenlet.

First of all, I created a theme for the image gallery screenlet. In this theme I specified that I only need one column, as I did in the previous screen; then I give the screenlet the item size and pass it the custom cell name.

In the custom cell itself, I built my own cell with the screenlets mentioned above. For the asset display screenlet, I created a simple view with the user portrait screenlet and a label with the username.

For the rating screenlet, I’ve used another custom view with a new image (a heart) and with the number of likes of the image.

The rest of the cell consists of one image and some labels to achieve the look we wanted.

As you can see, both screens were built in the same way: by creating themes to give a more appropriate style, and nothing else. All the logic is in the screenlets, so I didn’t have to worry about it.

Other screens

The other screens were far easier than the ones covered here because their views were less complex; I just had to tweak the style of the screenlets a little. For example, the sign up screenlet and the login screenlet follow the same style, as you can see in the pictures below.


Final result

In the next video you can see the result of the application.

Conclusion

As we have seen throughout this post, the main and most complex logic of the application is already provided by the screenlets. I only had to focus on customizing the screenlets by creating new themes, and nothing else. The new themes depend on the target platform, of course, so we needed iOS knowledge in this case.

Screenlets are a powerful tool for creating native applications that integrate with Liferay, and they are very easy to use.

My humble recommendation is that, if you have never used screenlets, you create a little app as a proof of concept to familiarize yourself with them. This task will only take a couple of hours and will help you gain knowledge about screenlets and see how powerful they are.

With this little knowledge and some (not much) experience on the mobile platform, you will be ready to develop an application like the one described in this blog, because all screenlets are used in a similar way.

Related links

Github repository: https://github.com/LuismiBarcos/LiferayPhotos

Luis Miguel Barco 2018-10-29T17:06:00Z
Categories: CMS, ECM

Getting Started with Talend Open Studio: Preparing Your Environment

Talend - Mon, 10/29/2018 - 09:56

With millions of downloads, Talend Open Studio is the leading open source data integration solution. Talend makes it easy to design and deploy integration jobs quickly with graphical tools, native code generation, and hundreds of pre-built components and connectors. Many people like to have additional resources to help them get started with Talend Open Studio, so we have put some together in this blog and in an on-demand webinar titled “Introduction to Talend Open Studio for Data Integration.”

In this first blog, we will discuss how to prepare your environment to ensure a smooth download and install process. Additionally, we will take a quick tour of the tool so that you can see what it has to offer as you begin working with Talend Open Studio.

Remember, if you want to follow along with us in this tutorial you can download Talend Open Studio here.

Getting Ready to Install Talend Open Studio

Before installing, there are a couple of prerequisites to address: First, make sure you have enough memory and disk space to complete the install (see the documentation for more information). Second, make sure you have downloaded the most recent version of the Oracle Java 8 JDK (found on the Java SE Runtime Environment 8 Downloads page of Oracle's website), as Java 9 is not yet supported.

If you’re strictly a Mac user, you need to have Java 8 version 151 installed. To find out which version is currently on your machine, open a command prompt and run java -version.

You can also find the version in a couple of other ways, including the “About Java” section of the Java control panel.

Next, we need to set up a JAVA_HOME environment variable. To do this, right-click This PC and open Properties. Then select Advanced System Settings, and under Environment Variables click New to create a new variable. Name the new variable JAVA_HOME, enter the path to your Java installation, and click OK. Under System Variables, add the same variable information and then click OK.
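
To sanity-check the variable afterwards, you can read it back from any process; here is a small Python sketch (the fallback message is just illustrative):

```python
import os

# Read JAVA_HOME as tooling that launches the JVM would see it.
java_home = os.environ.get("JAVA_HOME")

if java_home:
    print("JAVA_HOME =", java_home)
else:
    print("JAVA_HOME is not set for this session")
```

Note that environment variables set through the Windows dialog only apply to processes started after you click OK, so reopen any command prompt before checking.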

Installing Talend Open Studio

Alright, now the environment is ready for an Open Studio install. If you haven’t already, go to Talend’s official download page and locate the “download free tool” link for Open Studio for Data Integration. Once the installation archive is downloaded, open it and save the extracted files to a new folder. To follow best practice, create the new folder on your C drive. From here you can officially begin the install.

Once it’s finished, open the newly created TOS folder on your C drive and drill in to locate the application link that launches Open Studio. If you have the correct Java version installed, enough available memory and RAM, and have completed all the prerequisites, you should be able to launch Talend Open Studio on your machine without trouble.

When launching the program for the first time, you are presented with the User License Agreement, whose terms you need to read and accept. You are then given the chance to create a new project, import a demo project, or import an existing project. To explore pre-made projects, feel free to import a demo project; to start on your own project right away, create a new project.

Upon opening Studio for the first time, you will need to install some required additional Talend packages, specifically the package containing all required third-party libraries. These are external modules the software needs to run correctly. Before clicking Finish, you must accept the licenses for each library you install.

Getting to Know Your New Tool

Next, let’s walk through some of Open Studio’s features. The initial start-up presents a demo project for you to play around with in order to get familiar with Studio. To build out this project, we need to start at the heart of Open Studio: the Repository. The Repository, found on the left side of the screen, gathers the data related to the technical items used to design jobs. This is where you can manage metadata (database connections, database tables, and columns) as well as jobs once you begin creating them.

You can drag and drop this metadata from the Repository into the “Design Workspace”. This is where you lay out and design Jobs (or “flows”). You can view the job graphically within the Designer tab or use the Code tab to see the code generated and identify any possible errors.

To the right, you can access the Component Palette, which contains hundreds of different technical components used to build your jobs, grouped together in families. A component is a preconfigured connector used to perform a specific data integration operation. Components minimize the amount of hand-coding required to work with data from multiple heterogeneous sources.

As you build out your job, you can reference the job details within the Component tabs below the Design Workspace. Each tab displays the properties of the element currently selected in the Design Workspace; this is where component properties and parameters can be added or edited. Finally, next to the Component tabs, you can find the Run tab, which lets you execute your job. We hope this has been useful, and in our next blog, we will build a simple job moving data into a cloud data warehouse. Want to see more tutorials? Comment below to share what videos would be most helpful to you when starting your journey with Talend Open Studio for Data Integration.

The post Getting Started with Talend Open Studio: Preparing Your Environment appeared first on Talend Real-Time Open Source Data Integration Software.

Categories: ETL

Binding Java and Swift libraries for Xamarin projects

Liferay - Mon, 10/29/2018 - 09:14

Previously, we talked about how the market is really fragmented right now because no single hybrid framework dominates. We kept an eye on some of the most popular frameworks: React Native, NativeScript, Ionic 2, and Xamarin. We built a prototype with NativeScript and it worked fine, but we think the use of native libraries with that technology isn’t mature enough yet. If you want to know how these frameworks work underneath, and their pros and cons, read this article.

Since I started working with hybrid frameworks, I have gained a new perspective on how other developers work with different technologies and languages, and on how it feels to work outside my comfort zone. We usually work with Java for Android and Swift for iOS, and this year we built a hybrid solution for our framework using JavaScript and CSS. But we had never tried a cross-compile framework. So, we decided to give Xamarin a chance!

“We want to help people develop their apps easier and faster. Liferay Screens is our solution. Since 2017, you don’t have to choose between native and hybrid, you can use both at the same time!”

When we met Xamarin

The team decided to implement a prototype in Xamarin, but we didn’t know this framework very well. So, first of all, we had to find out what language to use, how it worked, and how we could reuse the native parts of Liferay Screens so we didn’t have to code everything again.

Official documentation

Xamarin has a lot of documentation, and you have to read it at least twice, if not more, just to connect all the dots and realise what you’re doing. We needed to know some things before starting:

When we took our first steps and wrote some C# code, we were prepared to start our binding library but…Surprise, surprise!

Officially, Xamarin does not support binding Swift libraries.

And now, what?

Ok, that’s awful, but it’s not the end of the world. Let’s investigate!

We talked with the community about binding Swift libraries and everybody seemed to be very lost. It looked as if no one had done this before (some people had, in Objective-C, but not in Swift). I kept investigating and found a post on StackOverflow related to what I wanted to do. This was really exciting, and I followed it step by step to get my C# library. The post was later moved to Medium: https://medium.com/@Flash3001/binding-swift-libraries-xamarin-ios-ff32adbc7c76

First binding draft

When we had our first draft of the library, with only two or three classes, we decided to test it. We created a new Xamarin project, installed the NuGet package and…It failed.

At this point, we read the documentation about error codes and binding troubleshooting, and we had to add this information to what we already knew:

Putting everything together

At that moment, we had everything we really needed to implement our library. So, we started the whole implementation:

  1. Create the .aar (for Java binding) and .framework (for Swift binding) files from native code.
  2. Create an empty binding library for Android in Visual Studio and add the .aar. Mono framework will create the C# classes for us (as many as it can).
  3. Create a fat library for iOS. It will contain x86 and arm architectures.
  4. Use Objective Sharpie (sharpie bind) to autogenerate C# classes for iOS. We can create them from a Pod or from a .h file. This command will create two files: ApiDefinitions.cs and StructsAndEnums.cs.
  5. Create an empty binding library for iOS in Visual Studio and add the files we created on step 4.
  6. Rebuild and check everything is ok. If there are some errors, we can check them directly in Visual Studio or in the documentation.
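As a rough sketch, steps 3 and 4 above might look like the following on the command line. All project names, paths, and SDK versions here are hypothetical, and the exact flags depend on your Xcode and Objective Sharpie versions:

```shell
# 3. Build device and simulator slices, then merge them into a fat framework
#    (MyLibrary is a placeholder for your own framework name)
xcodebuild -project MyLibrary.xcodeproj -sdk iphoneos -configuration Release
xcodebuild -project MyLibrary.xcodeproj -sdk iphonesimulator -configuration Release

lipo -create \
    build/Release-iphoneos/MyLibrary.framework/MyLibrary \
    build/Release-iphonesimulator/MyLibrary.framework/MyLibrary \
    -output MyLibrary.framework/MyLibrary

# 4. Let Objective Sharpie autogenerate the C# API definitions
#    from the framework's Objective-C-compatible headers
sharpie bind -framework MyLibrary.framework -sdk iphoneos12.1 -output Binding
```

The files generated in the Binding folder (ApiDefinitions.cs and StructsAndEnums.cs) are what you add to the empty iOS binding library in step 5.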

We worked hard and finally created a binding library from our native one. This library is already available on NuGet.

This was really amazing because we knew nothing about C#, Xamarin, or binding a library, and here we are, with our library released and ready to be used. We reached our goal and created a library without implementing everything again. It was hard and it took almost 4 months, but now everyone can use it, and that is what we love most!

Binding troubleshooting

The steps to create a binding library are tedious, and it’s very easy to get frustrated and not understand what’s going on, what an error means, or why something isn’t working.

  1. Do not despair and take a deep breath.
  2. It’s necessary to know and use the “Xamarin steps” (as our team calls them): clean the project several times, close and reopen the project, close and reopen the simulator, uninstall the app from the simulator, and, most importantly, delete the obj and bin folders.
  3. Xamarin.iOS crashes unexpectedly without any error messages in the console: check the log file; on Mac OS, do this via the Console app; on Windows, use the Event Viewer. In the Mac OS app, you must click User Reports and then look for your app’s name. Note that there may be more than one log file.
  4. At the moment, it’s necessary to add Swift runtime NuGet packages (available for Swift 3 and Swift 4) to our Xamarin project. This will increase the size of the final app.
Sample projects

It may also help to investigate Xamarin.Android and Xamarin.iOS sample apps developed by Liferay. Both are good examples of how to use the Screenlets, Views (Android), and Themes (iOS):

If you want to read more about Liferay Screens for Xamarin, you can read our official documentation: https://dev.liferay.com/es/develop/tutorials/-/knowledge_base/7-0/using-xamarin-with-liferay-screens

If you have any questions about how to install it or how to use it, just post it in our forums. We will gladly help you. And if you already have a binding library, share your experience with others.

Sarai Diaz 2018-10-29T14:14:00Z
Categories: CMS, ECM

First anniversary of the Liferay Spain User Group

Liferay - Sat, 10/27/2018 - 05:54

//The spanish version of this article can be found here: Primer aniversario de la comunidad Liferay en España, otra vez.

One year ago, on October 25, 2017, the Liferay Community in Spain started again. During this year, we have made important achievements: a great group of people has come together at every event, and we have held meetups regularly (usually the last Wednesday of every month). In this year alone, the Community has held 9 meetups, counting presentations and round tables, covering various topics:

When you compare the Liferay Symposium Spain 2018 agenda with the meetup topics, it’s clear that our Community had an exclusive first look at many of the sessions. At this point we should especially thank the Liferay team for their work and support, and in particular my fellow co-organizers of the Liferay Spain User Group: Eduardo García, Javier Gamarra, and David Gómez have managed to prepare the meetups and bring in experts for each of the areas.

As October marked a year since the user group resumed its activity, and taking advantage of the Liferay Symposium Spain happening in Madrid, we scheduled our monthly meetup on the same date to celebrate that first anniversary. Thus, the meetup was included by the Liferay team in the official agenda of the symposium as part of the celebration.

Fig. Screenshot of the Liferay Events App showing the LSUG meetup scheduled as part of the Symposium agenda.

 

In this last meetup, we made a review of what we have achieved and the main objectives for next year, highlighting:

  • Holding meetups in other locations, and even creating local groups (some attendees from Asturias and Seville showed interest)
  • Encouraging members to give talks
  • Planning other types of talks, more focused on different user levels, (introductory sessions, workshops, etc).
  • Using the forums again. They are very useful for the Community but had lost prominence
     

To commemorate the anniversary, the Liferay team gave away T-shirts to Community members.

   

We have also gained visibility from Liferay. Here are two important examples: the first from the closing keynote of the Liferay Symposium 2018, and the second with our honorary member.

    

If you want to join the Liferay Community, just join the Liferay Spain User Group on Meetup and sign in to the Liferay Community Slack, where we have a dedicated #lug-spain channel. If you are reading this from outside Spain, you can probably find a local Liferay User Group near you or, if one does not exist, you can learn how to create a new LUG in your area.
 

Álvaro Saugar 2018-10-27T10:54:00Z
Categories: CMS, ECM

Primer año de la Comunidad Liferay en España, otra vez

Liferay - Sat, 10/27/2018 - 03:36

//The english version of this article can be found here:  First anniversary of the Liferay Spain User Group

El  25 de octubre se cumplió un año del comienzo de la nueva etapa de la Comunidad Liferay en España. En este año se han conseguido logros importantes, como juntarnos un grupo fijo en todos los eventos y tener  una regularidad en los meetups durante el año. En total, entre presentaciones y mesas redondas, se han hecho  9 encuentros, en los que se han llegado a tocar diversos temas:

Si vemos las conferencias dadas y la agenda del Symposium Liferay 2018, se puede comprobar que, en la Comunidad, hemos tenido muchas primicias sobre los temas que se han expuesto. En este punto hay que agradecer especialmente el trabajo y apoyo al equipo de Liferay y especialmente a quienes están involucrados en la Comunidad. Eduardo García, Javier Gamarra y David Gómez han conseguido preparar y traer a los responsables de cada una de las áreas de las que se ha hablado.

Como en este mes de octubre se cumplía un año desde que retomamos la actividad del grupo de usuarios, se aprovechó el simposio de Liferay para adelantar la celebración y hacer coincidir la reunión, que incluso fue incluida en la agenda del simposio.

En esta  última reunión, hicimos un repaso de lo hecho y los objetivos principales para el próximo año, entre los que podemos destacar:

  • Meetups en otras localizaciones, incluso facilitar la creación de grupos locales (entre los asistentes se mostró interés en Asturias y Sevilla)
  • Participación en charlas de los miembros
  • Otro tipo de charlas, más enfocadas a usuario, workshop, etc.
  • También se remarcó mucho la necesidad de fomentar los foros, ya que habían perdido importancia.

Para conmemorar el aniversario, desde el equipo de Liferay, regalaron camisetas a los miembros de la comunidad.

      

También hemos conseguido visibilidad por parte de Liferay, como se puede ver en el cierre del simposio y con nuestro miembro de honor.

       

Si quieres unirte a la comunidad de Liferay en España, puedes hacerlo en el grupo de meetup y en el Slack de la comunidad donde tenemos un canal #lug-spain específico. Si no lees esto desde España, quizá puedas encontrar un grupo de usuarios de Liferay que te quede cerca y si no hay, siempre puedes empezar un LUG en tu zona.

Álvaro Saugar 2018-10-27T08:36:00Z
Categories: CMS, ECM

Three reasons why you need to modernize your legacy enterprise data architecture

SnapLogic - Fri, 10/26/2018 - 13:33

Previously published on itproportal.com. A system must undergo “modernization” when it can no longer address contemporary problems sufficiently. Many systems that now need overhauling were once the best available options for dealing with certain challenges. But the challenges they solved were confined to the business, technological, and regulatory environments in which they were conceived. Informatica,[...] Read the full article here.

The post Three reasons why you need to modernize your legacy enterprise data architecture appeared first on SnapLogic.

Categories: ETL

Customizing Liferay Navigation menus

Liferay - Fri, 10/26/2018 - 07:47

In Liferay 7.1 we introduced a new approach to Liferay navigation. The new navigation doesn't depend on the pages tree: it is possible to create multiple navigation menus, to compose them using different types of elements, and to assign them different functions. Out of the box, we provide 3 types of menu elements: Pages, Submenus, and URLs. But we don't want to limit users to these 3 types; any developer can create their own navigation menu element type by implementing a simple interface, and in this article I want to explain how to do that.
As the idea for this example, I want to use "call back request" functionality. There are plugins for most modern CMSs that allow creating a "Call back request" button and processing the user's input in a specific way. The idea of this article is to provide more or less the same functionality (simplified, of course) for Liferay navigation menus.
Let's create a module with the following structure:

The key parts here are the CallbackSiteNavigationMenuItemType class, which implements the SiteNavigationMenuItemType interface, and the edit_callback.jsp view, which provides the UI to add/edit navigation menu items of this type.

CallbackSiteNavigationMenuItemType has 4 main methods (getRegularURL, getTitle, renderAddPage, and renderEditPage); the first of them defines the URL the specific menu item leads to. In the getRegularURL method, we create a PortletURL to perform a notification action with the userId from the properties of the SiteNavigationMenuItem instance and a number to call back, taken from the result of the JavaScript prompt function:

@Override
public String getRegularURL(
        HttpServletRequest request,
        SiteNavigationMenuItem siteNavigationMenuItem) {

    UnicodeProperties properties = new UnicodeProperties(true);

    properties.fastLoad(siteNavigationMenuItem.getTypeSettings());

    PortletURL portletURL = PortletURLFactoryUtil.create(
        request, CallbackPortletKeys.CALLBACK, PortletRequest.ACTION_PHASE);

    portletURL.setParameter(
        ActionRequest.ACTION_NAME, "/callback/notify_user");
    portletURL.setParameter("userId", properties.getProperty("userId"));

    String numberParamName =
        _portal.getPortletNamespace(CallbackPortletKeys.CALLBACK) + "number";

    StringBundler sb = new StringBundler(7);

    sb.append("javascript: var number = prompt('");
    sb.append("Please specify a number to call back','');");
    sb.append("var url = Liferay.Util.addParams({");
    sb.append(numberParamName);
    sb.append(": number}, '");
    sb.append(portletURL.toString());
    sb.append("'); submitForm(document.hrefFm, url);");

    return sb.toString();
}

The getTitle method defines the title shown to the user in the menu, by default it uses the current SiteNavigationMenuItem name.
In the renderAddPage and renderEditPage methods, we use JSPRenderer to show the appropriate JSP to the user when adding or editing a navigation menu item of this new type.

Our view is also pretty simple:

<%@ include file="/init.jsp" %>

<%
SiteNavigationMenuItem siteNavigationMenuItem =
    (SiteNavigationMenuItem)request.getAttribute(
        SiteNavigationWebKeys.SITE_NAVIGATION_MENU_ITEM);

String name = StringPool.BLANK;
User user = null;

if (siteNavigationMenuItem != null) {
    UnicodeProperties typeSettingsProperties = new UnicodeProperties();

    typeSettingsProperties.fastLoad(siteNavigationMenuItem.getTypeSettings());

    name = typeSettingsProperties.get("name");

    long userId = GetterUtil.getLong(typeSettingsProperties.get("userId"));

    user = UserLocalServiceUtil.getUser(userId);
}

CallbackDisplayContext callbackDisplayContext = new CallbackDisplayContext(
    renderRequest, renderResponse);
%>

<aui:input label="name" maxlength='<%= ModelHintsUtil.getMaxLength(SiteNavigationMenuItem.class.getName(), "name") %>' name="TypeSettingsProperties--name--" placeholder="name" value="<%= name %>">
    <aui:validator name="required" />
</aui:input>

<aui:input name="TypeSettingsProperties--userId--" type="hidden" value="<%= (user != null) ? user.getUserId() : 0 %>" />

<aui:input disabled="<%= true %>" label="User to notify" name="TypeSettingsProperties--userName--" placeholder="user-name" value="<%= (user != null) ? user.getFullName() : StringPool.BLANK %>" />

<aui:button id="chooseUser" value="choose" />

<aui:script use="aui-base,liferay-item-selector-dialog">
    A.one('#<portlet:namespace/>chooseUser').on(
        'click',
        function(event) {
            var itemSelectorDialog = new A.LiferayItemSelectorDialog(
                {
                    eventName: '<%= callbackDisplayContext.getEventName() %>',
                    on: {
                        selectedItemChange: function(event) {
                            if (event.newVal && event.newVal.length > 0) {
                                var user = event.newVal[0];

                                A.one('#<portlet:namespace/>userId').val(user.id);
                                A.one('#<portlet:namespace/>userName').val(user.name);
                            }
                        }
                    },
                    'strings.add': 'Done',
                    title: 'Choose User',
                    url: '<%= callbackDisplayContext.getItemSelectorURL() %>'
                }
            );

            itemSelectorDialog.open();
        });
</aui:script>

We use TypeSettingsProperties-prefixed parameter names to save all the necessary fields in the SiteNavigationMenuItem instance properties. We need to save two fields: the name of the navigation menu item, which is used as its title in the menu, and the ID of the user we want to notify about a callback request; in a practical case this could be a sales manager, account executive, or whoever is responsible for this type of request. This JSP uses the Liferay Item Selector to select the user, and the final UI looks like this:
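Under the hood, those TypeSettingsProperties-- values end up in the menu item's type settings as a simple key=value string, which is what the fastLoad() calls above parse. A minimal plain-Java sketch of that round trip, using java.util.Properties as a stand-in for Liferay's UnicodeProperties (the field values here are made up for illustration):

```java
import java.io.StringReader;
import java.util.Properties;

public class TypeSettingsDemo {

    public static void main(String[] args) throws Exception {
        // Roughly what siteNavigationMenuItem.getTypeSettings() returns
        // after the form is saved (hypothetical name and userId)
        String typeSettings = "name=Call me back\nuserId=20156";

        // UnicodeProperties.fastLoad() consumes the same key=value lines;
        // java.util.Properties can parse them too for this demo
        Properties properties = new Properties();

        properties.load(new StringReader(typeSettings));

        // The type's getTitle() and getRegularURL() read these back
        System.out.println(properties.getProperty("name"));
        System.out.println(properties.getProperty("userId"));
    }

}
```

This is why getRegularURL above can recover the userId with a single getProperty call: the form parameters and the stored settings share one flat key space.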

Auxiliary functionality in this module is the notification functionality. We want to send a notification by email and through Liferay's internal notification system. In order to do that we need two more components: the MVCActionCommand used in our getRegularURL method, and a notification handler to allow sending this type of notification. NotifyUserMVCActionCommand is straightforward: it accepts the userId and number parameters from the request and sends a notification to the user using the SubscriptionSender component:

@Component(
    immediate = true,
    property = {
        "javax.portlet.name=" + CallbackPortletKeys.CALLBACK,
        "mvc.command.name=/callback/notify_user"
    },
    service = MVCActionCommand.class
)
public class NotifyUserMVCActionCommand extends BaseMVCActionCommand {

    @Override
    protected void doProcessAction(
            ActionRequest actionRequest, ActionResponse actionResponse)
        throws Exception {

        String number = ParamUtil.getString(actionRequest, "number");
        long userId = ParamUtil.getLong(actionRequest, "userId");

        SubscriptionSender subscriptionSender = _getSubscriptionSender(
            number, userId);

        subscriptionSender.flushNotificationsAsync();
    }

    private SubscriptionSender _getSubscriptionSender(
            String number, long userId)
        throws Exception {

        User user = _userLocalService.getUser(userId);

        SubscriptionSender subscriptionSender = new SubscriptionSender();

        subscriptionSender.setCompanyId(user.getCompanyId());
        subscriptionSender.setSubject("Call back request");
        subscriptionSender.setBody("Call back request: " + number);
        subscriptionSender.setMailId(StringUtil.randomId());
        subscriptionSender.setPortletId(CallbackPortletKeys.CALLBACK);
        subscriptionSender.setEntryTitle("Call back request: " + number);
        subscriptionSender.addRuntimeSubscribers(
            user.getEmailAddress(), user.getFullName());

        return subscriptionSender;
    }

    @Reference
    private UserLocalService _userLocalService;

}

And to make it possible to send notifications from this particular portlet, we need to implement a UserNotificationHandler to allow delivery and to define the notification body:

@Component(
    immediate = true,
    property = "javax.portlet.name=" + CallbackPortletKeys.CALLBACK,
    service = UserNotificationHandler.class
)
public class CallbackUserNotificationHandler
    extends BaseModelUserNotificationHandler {

    public CallbackUserNotificationHandler() {
        setPortletId(CallbackPortletKeys.CALLBACK);
    }

    @Override
    public boolean isDeliver(
            long userId, long classNameId, int notificationType,
            int deliveryType, ServiceContext serviceContext)
        throws PortalException {

        return true;
    }

    @Override
    protected String getBody(
            UserNotificationEvent userNotificationEvent,
            ServiceContext serviceContext)
        throws Exception {

        JSONObject jsonObject = JSONFactoryUtil.createJSONObject(
            userNotificationEvent.getPayload());

        return jsonObject.getString("entryTitle");
    }

}

After deploying this module, we can add an element of our new type to a navigation menu. Unfortunately, the only application display template for navigation menus that supports JavaScript in the URLs is List, so in order to try our Callback request element type, we have to configure the Navigation Menu widget to use the List template. Clicking on the new menu item, we can type a number and request our call back.

Please keep in mind that a user cannot send a notification to themselves, so it is necessary to log out and perform this action as a guest user or with any other account. In the My Account section we can see the list of notifications:

Full code of this example is available here.

Hope it helps! If you need any help implementing your own navigation menu item types, feel free to ask in the comments.

Pavel Savinov 2018-10-26T12:47:00Z
Categories: CMS, ECM

Stripe for Credit card payments

CiviCRM - Thu, 10/25/2018 - 17:11

If you've heard of stripe you'll know it's a great platform for accepting credit card payments.  If you haven't heard of it and are reading this then you should try it out: https://stripe.com

Categories: CRM

OSI Incubator Contributes to Success in STEM

Open Source Initiative - Thu, 10/25/2018 - 16:06

Siena College's Urban Scholars Program provides elementary and middle school students in the Albany, New York school district educational opportunities within Science, Technology, Engineering, and Math (STEM) related fields through active, hands-on workshops. Participants work closely with Siena student-mentors in small groups that encourage critical thinking, teamwork, and the persistence to never give up if something seems too hard. In 2015, Siena's Urban Scholars adopted the FLOSS Desktops for Kids program, which, combined with other activities, has led to astonishing student outcomes: increased interest, and greater success, in STEM courses.

Dr. Michele McColgan and Dr. Robert Colesante of Siena College have released a report on the program and its success, finding:

  1. Middle school students that have participated in Urban Scholars have greater pass rates on middle school math and English language arts assessments.
  2. Urban Scholars students passed New York State Regents Exams at a higher rate (68%) than non-participants (51%).
  3. Those completing the program are more than twice as likely to not only continue on in STEM-related courses, but also take advanced courses (i.e., AP courses).
  4. Participants have higher graduation rates than non-participants for the two years for which data is available (2016-2017).

Congratulations to Drs. McColgan and Colesante, Siena College and of course the Urban Scholars themselves for their great success. The entire report is below.

Urban Scholars Program for the Albany School District - Outcomes from 2013-2017

I. Program Participants

The Siena College Urban Scholars Program (USP) has operated in conjunction with the Albany school district since 2003. It has focused on STEM content and activities since the 2010-11 academic year, and serves approximately 60 under-served, under-represented upper-elementary and middle school students each year. Students are bused to the Siena College campus on 14 Saturdays during the academic year, for a total of up to 70 contact hours per year. Students participate in short morning and afternoon meetings as a group, take two classes, and eat lunch in the college cafeteria. Classes are taught by college STEM faculty, STEM professionals, and undergraduate STEM students.

The Urban Scholars Program has a similar percentage of White and Latinx students, more African American students, and somewhat fewer Asian students than the district (see Figure 1). Participants were very similar to non-participants in the poverty levels of the neighborhoods in which they live (see Figure 2).

Most students (93%) begin as 5th or 6th graders. Students in 5th-6th grades comprise 41% of the participants and 7th-8th graders comprise 59%. Most students (92%) participate for 2 or more years, equating to 28 Saturdays or approximately 140 contact hours. Of those, 26% participate for 3 years and 21% participate for 4 or more years.

Figure 1. Ethnicity of the students in the district (a) and students in the USP program (b).

 

Figure 2. Poverty level of the students in the district (a) and students in the USP program (b).

II. Outcomes

A. Data Collection and Primary Questions
Recently, we began examining impact on New York State (NYS) assessments in English language arts (ELA) and mathematics during students' middle school years, which take place while they are participating in the program. Since many former students have progressed through high school, we also examine NYS Regents exams.

Participants are NOT self-selecting STEM achievers
The NWEA Math Growth score, a metric used by schools to measure growth in math over time, was recorded for students from 2014-2017. As a baseline, and to determine whether the students participating in the program were high achievers or more motivated than typical students in the school population, the NWEA score was recorded the year before participation and one and two years after the start of participation in the program. There was no difference in NWEA score before participation and only a very slight increase of 0.66 after one year, showing that the students in the program were representative of the students in the school. As another potential indicator, final school grades in math courses (8th grade, algebra, geometry) were also analyzed, and participants in the program do not have higher course grades than their peers. Violent and Disruptive Incident Reporting (VADIR) scores were analyzed, and there were no differences between participants in the program and a control group. Based on these three results, the students in the program did not self-select due to STEM interest or due to higher achievement in STEM courses.

Middle School Standardized ELA and Math Assessments
A comparison of the proficiency of participants in the Urban Scholars program and district students is shown in Figure 3a. Participants scored significantly higher than non-participants on both tests (p < 0.0001). 25% of participants passed middle school math assessments while 32% passed their ELA assessments.

High School STEM Course Taking
The Regents courses examined include Living Environments (Biology), Earth Science, Chemistry, Physics, Algebra, Geometry, Algebra 2, Global History, US History, and English Language Arts. The passing rates on the state exams for Urban Scholars participants and district students for test years 2013 to 2017 are shown in Figure 3b. Participants in the program passed at a higher rate (68%) than non-participants (p < .0001).

Figure 3. (a) NYSED ELA and Mathematics proficiency for USP participants (n=329 ELA, n=309 Math) and Albany students (n=9780 ELA, n=9171 Math) and (b) and Regents passing rates for years 2013 to 2017.

Passing rates and exam scores by subject are shown in Figure 4. Participants are passing at higher rates (Fig 4a) and with higher scores (Fig 4b) than non-participants.

Figure 4. (a) NYS Regents Pass rates by subject for USP participants and Albany students and (b) and Regents mean scores for years 2013 to 2017.

The number of students taking advanced STEM courses is increasing, as shown in Figure 5a. This correlates with the percentage of students graduating with advanced designation (Fig 5b).

Figure 5. (a) Number of high school math and science courses taken by USP participants and Albany students and (b) and the percentage of students graduating with advanced designation for years 2013 to 2017.

 

Graduating with Advanced Designation
Graduation information for district students and participants is shown in Table 1. Since students begin the program in 5th or 6th grade, and our analysis begins with them in 2010-11, few have moved through high school to graduation; the data from the school district is preliminary. Still, participants have higher graduation rates than non-participants for the two years with data available (2016-2017).

Table 1. Graduation of District Students and Program Participants for 2016-17 (n=30)

                                 District    Urban Scholars Program
  Graduation Rate*               62%         93% - 100%
  Regents with adv designation   13%         43%

Image credit: Air National Guard photo by Tech. Sgt. John Hughel, 142nd Fighter Wing Public Affairs/Released. Public Domain, https://www.ang.af.mil/Media/Photos/igphoto/2000886896/

Categories: Open Source

Future-proof your API lifecycle strategy

SnapLogic - Thu, 10/25/2018 - 13:32

Every software professional knows what an application programming interface (API) is, right? It’s a decades-old technology designed to allow data flow between different applications. Today, modern APIs do much more than just enable inter-application data exchange. They are the foundation of new business processes, the lifeblood of customer-centric innovation. APIs speed new business processes into production,[...] Read the full article here.

The post Future-proof your API lifecycle strategy appeared first on SnapLogic.

Categories: ETL

CiviCRM Establishment Committee Elections Update

CiviCRM - Thu, 10/25/2018 - 10:07
Background

The first annual CiviCRM Governance Summit was held in New Jersey, USA on 25/26 September 2018, attended by 29 people - core team, partners, users and service providers from around the globe.

The major decision of the summit was to create a representative body to help guide the CiviCRM project.  An outline of the structure was produced, but there are many details yet to be resolved so the secondary decision was to create an interim committee, called the Establishment Committee, for the purpose of defining the eventual structure.  

Categories: CRM

CiviContribute online Training Session - November 1st

CiviCRM - Wed, 10/24/2018 - 18:56

The Fundamentals of Contribution Management is a 2-hour online training course designed for new users of CiviCRM and taught by Cividesk.

This training course will cover how to customize CiviContribute to work best for your organization and focus on important features. We demonstrate examples in our demo CiviCRM site so you learn how to add new contributions manually, track online donations, search on contributions and create reports, invoices and thank you letters for donors. 

Categories: CRM

Black Friday & Cyber Monday: 6 tips for successful online sales!

PrestaShop - Wed, 10/24/2018 - 10:47
Mark your calendars! This year, Black Friday is November 23, 2018 and will last all weekend long until Cyber Monday on November 26, 2018.
Categories: E-commerce