Posted almost 4 years ago by Neil Griffin
Each year at about this time I try to post the download stats for the Liferay Faces project. I am pleased to report that downloads for the Liferay Faces project hit another all-time high back in September of 2020 and held fairly steady until January of this year (2021). Downloads seem to take a dip after the holidays, so perhaps that's why we saw a drop-off in February. Since then, there has been an upward trend again. That makes three years in a row of all-time highs.
I think that one of the best indicators of usage is reflected in the stats for the Liferay Faces Bridge Implementation artifact, which continues to show a generally upward-to-steady trend over the last 12 months:
Many thanks to our faithful JSF portlet developer community for making Liferay Faces such a great success since 2012!
Liferay Faces Download stats for previous years:
May 2020
April 2019
March 2018
Posted almost 4 years ago by David H Nebinger
Introduction
Recently I asked some of my teammates for ideas about what to blog about next. Most of the time I take my inspiration from different clients I work with or questions that come up on the Liferay Community channels, but lately the well has seemed kind of dry. So I went fishing for suggestions.
My friend Andrew Betts, who happens to be on the Liferay DXP Cloud (DXPC) team, suggested that I write a blog about, wait for it, DXPC. At first it didn't really grab me; I prefer technical issues where I can get my hands dirty, and at first glance I couldn't tell if I was really going to get to be hands-on or not. Eventually I realized that setting up a DXPC environment could really be technical because of everything that's involved, so the seed Andrew planted finally started to take root and grew into the blog post you're reading now.
What is DXPC?
Liferay DXP Cloud is a hosted and managed version of Liferay DXP. As the client, you get a lot of control over the environment; it just launches into Liferay's cloud, and support monitors the cloud to watch for basic problems. Effectively, though, it is just like your on-prem DXP solution: there is still the application server, the database, and Elasticsearch, just with more polish around the DevOps aspects of building and deploying. There are other cool features such as automatic DR environments, site-to-site VPN support to allow connections into your environment, Dynatrace for APM, and more.
I'm not here to sell you on DXPC, but I'm going to pick up just after you've decided to move to DXPC.
I actually have a project that is just starting DXPC onboarding, so as it was getting organized I decided to write this blog post and capture all of the details so I'd have all of the steps correct. I was introduced to the project at the point where the client had already indicated they wanted to launch on DXPC.
But from the point the decision was made, there were over 4 weeks of doing nothing. Well, nothing from my perspective... Actually, there are contractual things that both Liferay and the business need to work out, agree to, and eventually sign off on. Also, with late spring/early summer coming out of Covid-19, key people were out on vacation, and this more than anything else impeded forward progress.
This blog leaves out all of these business and contractual details, mostly because I'm not involved in that aspect. It can be completed in less than 4 weeks' time, but I honestly cannot give a reasonable estimate of how long it may take for your business. I would suggest having the key people available to complete the required activities, and trying to keep the ball in Liferay's court, to help get through this process in a timely manner.
What happens first?
The first thing Liferay will do is provision your environments. You'll get a form to fill out that will resemble the one below:
There are a bunch of key fields here that need to be defined in order to provision your DXPC environments:
Organization Name: This will be your company name.
Project ID: This is the name for your DXPC environment that you'll be seeing in the DXPC Console, so pick a value here that will help distinguish between environments when you have multiple environments.
Primary & DR Regions: These are the regions where your primary environment and DR environment live (if you have opted for the DXPC DR option). Select a region that is close to your primary audience and, of course, don't pick the same region for both the Primary and DR environments.
Admin Account(s): Here you get to enter 1 or 2 accounts which will be your DXPC administration accounts. Note that this is not for administration of your Liferay environments; it just identifies the users who have administrative access to the DXPC console, i.e. the user(s) who can start/stop environments, perform deployments, etc. Best practice is to have 2 admin users so there is always a backup in case the primary administrator has a problem. Note that both users need to provide their GitHub usernames, as these users will be able to access the DXPC workspace repository that will be created in GitHub.
If you are working with Liferay Global Services, I'd recommend setting [one of] the consultant(s) as an administrator for the duration of your engagement. This will allow your GS team to build and deploy to your DXPC environments as necessary.
Provisioning
After Liferay receives this information, it is passed on to the Liferay DXPC Provisioning Team. This team will be creating a number of new assets for you:
A Liferay DXPC Workspace - I've mentioned the Liferay Workspace many times before, and the DXPC Workspace contains that and more. It includes component configurations for Nginx (for the web server), Liferay/Tomcat (for the application server), MySQL (for the database), and ElasticSearch (for the search support) Docker images. The DXPC Workspace also includes a Liferay Workspace for managing all of your Liferay customizations.
A Jenkins account for CI/CD and managing your builds.
A Liferay DXP Cloud console account for managing your builds and deployments, viewing your logs, and managing your services and backups.
[Optional] A Dynatrace environment for monitoring your DXPC systems.
The provisioning process can take 2-5 days to complete before you will have access to your new DXPC assets.
At the end of the provisioning process, you'll receive 3 emails about your new assets, but the most important one will have the subject line of "Your Name, your DXP Cloud project (project name) is ready". Here's a version in its tiny glory:
The email basically contains 4 important sections (and 3 informational sections) for accessing your new environments. I'm going to go from the bottom up because, as a developer, this will likely be the order in which you access the environments.
The absolute bottom two sections have links to support and Liferay University videos to introduce the DXPC environment to you.
The next one is your "infrastructure" environment link, which contains basically your Jenkins instance. This Jenkins instance is pre-configured with jobs to automagically rebuild your DXPC Docker Images from the Github repository for the Main and Develop branches. As you commit to or merge PRs to either of these key branches in your repository, Jenkins will take care of completing the build and creating all of the images needed for your DXPC environments. Note that the job only builds the images, it is not going to automagically deploy them to DXPC for you.
The next one up from the bottom is the link to your new GitHub repository. This repository is automatically created for you in the "dxpcloud" organization as a way to transfer the DXPC files to you. You must move the repo to your own GitLab or Bitbucket repo, or stay on GitHub but use your own organization, and you have to complete this within the first 10-14 days after your environment has been provisioned. The DXPC support team can help you update your environment for the new external repo.
There's a note in this section saying that if you don't get the invitation email to join your GitHub repo, you should send an email to the provisioning team address given there, and this is actually the only section that counsels you to do this. My only guess here is that if any part of the provisioning has failed, it is likely going to be on this particular step. If it happens to you, don't fret about it. Just send an email to the provisioning team and they'll fix you right up.
The next section up contains links to the non-production environments, typically DEV and UAT. You can't really click on them at this point because nothing has actually been deployed yet. One thing to note is that the non-prod environments are all protected by Basic Auth at the Nginx level; the credentials you find in this section of the email are only for getting into the non-prod environments. Now you might be asking "Why hide Liferay behind Basic Auth?" Well, Liferay is only going to give you a default environment (when you deploy), so the standard test@liferay.com credentials will be your Liferay admin account. The Basic Auth prevents someone with knowledge of the default Liferay credentials from discovering your non-prod environments and logging in as an administrator to wreak a little havoc. So me, I'd leave it in place, but you can disable it if you want to. I'll share how to do that a little farther down the page...
The next section up in the email is the link to access your DXPC Admin Console. Note that this is completely different from a Liferay admin console, so don't confuse the two. The DXPC Admin Console is where you go to view logs, (re)deploy environments/updates, check the status of system components, and so on. Basically every activity you do to manage your DXPC environment is going to start from the DXPC Admin Console.
An important aspect to note here: your DXPC administrators can be completely different from your Liferay administrators. Your DXPC admins are going to be your operations team, the ones that monitor systems, perform deployments, reboot servers, etc. That's completely different from your Liferay administrators, who are managing sites and maybe even users and content. Can these two different types of administrators be the same people? Sure, but they don't have to be if you don't want them to be.
The final top section is the "Accept Our Email Invitations" block. Liferay will be sending you separate emails for each environment that was created for you (DEV, UAT, PROD, maybe a DR, Infrastructure, etc). If you don't get these emails, check your spam folder (and if you find them there, take a moment to whitelist the sender so future DXPC emails get delivered correctly to your inbox).
Verification and Setup
So Liferay has just dropped all of these new toys off for you to play with, but where do you start?
Remember that we must copy our GitHub repo out of the GitHub "dxpcloud" organization within the first 10-14 days, and we should do this as soon as possible. Either GitLab, Bitbucket, or your own GitHub organization repo is fine, and the DXPC support team will be happy to help us fix things up once we've finished the move. The rest of this blog assumes this step has been completed.
Me, I always want to allocate time to verify that everything is working correctly and even get my initial environments created.
Create the environments? Hasn't Liferay already done that? No, not really. The DXP Cloud team has provisioned the environments for you, but as of yet there are no databases, no application servers, no ElasticSearch servers, ... So yeah, part of our verification process is going to be to get these initial services created for the first time.
The first thing you need to do is get access to the GitHub repo. With this in place, you want to figure out how you're going to be updating the repo. Here at Liferay Global Services, we typically fork this repo to give us a private place to work and, as we complete tasks, we send PRs to this repo for merging. This lets each developer work in their own environment, free of merge conflicts during development, taking on merge responsibilities only when prepping the PR for submission. It works really well for us, and we encourage clients to follow the same pattern. It is just a git repo, so you are free to manage it any way you want and any way you are used to.
Ultimately, though, you need to clone this repo (or your fork, if you are using our suggestion) to your local system, as this is where all of your environment configuration stems from.
DXP Cloud Workspace
This new repository you have just cloned contains what we refer to as the DXPC Workspace. The workspace has configuration for each one of the components of a typical Liferay DXP environment - database, search, Liferay and web servers, plus goodies for backups and CI handling.
Here's the basic folder structure that you'll get in your new repository:
Each of the root components has a special configuration file, the LCP.json file. This is a JSON file which contains configuration details specific to each component. When Jenkins is building out the environments, the details from the LCP.json files will be used as the primary definition for each service component. Some of the content you'll see in the LCP.json file will be repeated across each service, and some is unique to the specific service.
Here, for example, is a snippet from the LCP.json file from the database folder:
{
"kind": "Deployment",
"id": "database",
"image": "liferaycloud/database:4.2.2",
"memory": 1024,
"cpu": 2,
"scale": 1,
"ports": [
{
"port": 3306,
"external": false
},
...
Some of this may not make much sense yet, and most of the time you won't need to tamper with the file contents at all because it will contain values previously agreed upon in the contracts. Some of it you may need to change at some point (e.g., if Liferay provides a new image version for the database, you might need to change the image version here), but from the initial provisioning standpoint you should have reasonable starting values.
Each of the components has a configs directory with environment-based subdirectories; I've only expanded this portion for the backup component in the image above, but you will find this same structure on all of the components. As developers, we might be familiar with building an artifact specifically for a target environment such as DEV or PROD, or with building a single artifact and keeping the environment-specific configuration external to it so the same artifact can be used for both DEV and PROD.
The DXP Cloud workspace handles things a little differently. The configuration for all environments is part of the build, but environment variables at run time tell the Docker environment which configuration set to use. The database component, for example, builds into your database image, and that image is used to populate your DEV, UAT, and PROD DXPC environments; each environment then starts the database with a different environment setting, so when you start DEV, the database service starts using the configs/dev configuration. The configs/common directory is special in that it is where you provide configuration that applies to all environments.
This can be a little hard to get used to, but pretty soon it will make sense and the good news is that Liferay does this consistently across each of the service components, so you don't have to learn a new way of configuration each time.
The Liferay Workspace
The one folder I didn't expand in the listing above is the liferay component. I didn't do this because if you're already a Liferay developer, you already know what this folder contains - a typical Liferay Workspace (so all of my other blogs about the workspace and how you can use it and the features it has, etc., all of those still apply to the Liferay Workspace that is part of the DXPC Workspace).
The one important addition here that I want to highlight is the LCP.json file. This file is probably the one you're going to be changing the most often because this one controls the cluster and individual node sizing and other important OS-level settings. As this is such an important file, I'm going to include the starting one that you'll be given:
{
"kind": "Deployment",
"id": "liferay",
"image": "liferaycloud/liferay-dxp:7.3-4.2.1",
"memory": 8192,
"cpu": 8,
"scale": 1,
"ports": [
{
"port": 8080,
"external": false
}
],
"readinessProbe": {
"httpGet": {
"path": "/c/portal/layout",
"port": 8080
},
"initialDelaySeconds": 120,
"periodSeconds": 15,
"timeoutSeconds": 5,
"failureThreshold": 3,
"successThreshold": 1
},
"livenessProbe": {
"httpGet": {
"path": "/c/portal/layout",
"port": 8080
},
"initialDelaySeconds": 480,
"periodSeconds": 60,
"timeoutSeconds": 5,
"failureThreshold": 3,
"successThreshold": 1
},
"publishNotReadyAddressesForCluster": false,
"env": {
"LCP_PROJECT_LIFERAY_CLUSTER_ENABLED": "true",
"LIFERAY_JVM_OPTS": "-Xms2048m -Xmx6144m"
},
"dependencies": [
"database",
"search"
],
"volumes": {
"data": "/opt/liferay/data"
},
"environments": {
"infra": {
"deploy": false
},
"prd": {
"cpu": 12,
"memory": 16384,
"scale": 2,
"env": {
"LIFERAY_JVM_OPTS": "-Xms4096m -Xmx12288m"
}
},
"uat": {
"cpu": 12,
"memory": 16384,
"scale": 2,
"env": {
"LIFERAY_JVM_OPTS": "-Xms4096m -Xmx12288m"
}
},
"dr": {
"cpu": 12,
"memory": 16384,
"scale": 2,
"env": {
"LIFERAY_JVM_OPTS": "-Xms4096m -Xmx12288m"
}
}
}
}
There's a lot going on here, right? So let's pick out some of the important parts:
We start with the default system configuration. Here we can see that the default is an 8g system w/ 8 CPU but only a single server, and we also declare that the only port on the service will be port 8080 and that it is not publicly available (since external is false).
Next is the definition of the "readiness probe" and "liveness probe". This is what the DXPC monitoring will use to verify that the environment is ready and able to serve traffic.
The env section is going to be very important; this is where we can define environment variables that will be set within the OS. Here we can see the LIFERAY_JVM_OPTS environment variable being set; it is passed into the runtime container and, when Liferay/Tomcat is started, used as the JVM options to start the instance. So for our 8g system, we're going to let Liferay use 6g of that space. We can use the env section to define additional environment variables that we want to pass into the image. Liferay also allows an alternative to using a portal-ext.properties file for property overrides: you can set portal properties through environment variables following a specific naming format (you can find the right environment variable name for each portal property by checking the portal.properties file in the Liferay source), so anything you could set in portal-ext.properties you could instead set through environment variables.
The dependencies section lists the services that the Liferay container depends upon, namely the database and search (Elastic). Volumes defines the shared external volumes and in this case it is the Liferay data volume.
The environments section has environment-specific override values. The infra environment is your Jenkins service, so deploy is set to false so you don't get a Liferay/Tomcat server in your infrastructure environment.
PROD, UAT and DR will be pre-populated with values from your contract, so in this example we opted for a 16g 12 CPU cluster of 2 nodes in each of these environments, so we override the defaults with the right values.
One verification step I like to do here is to ensure that the memory value and the JVM memory settings are aligned. Here for example the system is 16g but only 12g is allocated to Liferay. This is going to leave 4g for the OS and any other services I need to run there, and I might feel that 4g is really wasteful so perhaps the JVM should be bumped up to 14g instead of only 12g. Your call, but stay below the memory value and be sure to leave room for the OS runtime.
If there is a change I make to this file, it will most often be to the "scale" property. There are times where you will want to force launching a single instance only, not the full cluster. When you need to do this, set the scale to "1" (you'll need to commit to the main repo, wait for Jenkins to complete the build, go into your DXPC Console and deploy the new build, but it will be limited to a single node). Remember that if you do change this to 1, when you're done restore it back to the original value or you'll continue to have only 1 node running...
First Time Setup
So after we've verified that all of our files are there, checked our LCP.json files and found them to be in line with our contracts and expectations, we're ready to make our initial changes in preparation for our environment creation.
These are the things that I'm going to [possibly] do in the DXPC Workspace:
Remember how I said you could disable the Basic Auth settings for the non-prod environments? If you want to do this, you're going to go to the webserver/configs/env/conf.d/liferay.conf and comment out or remove the lines with the auth_basic prefix. If you want to keep the basic auth configuration but simplify the password, you can point to a different file that has the value(s) you want to use. Follow these steps replacing env with the environment that you want to change, dev, uat, ...
In the Liferay Workspace, I'm going to go to the configs/common folder and create my portal-ext.properties, and I'm going to use https://liferay.dev/blogs/-/blogs/professional-liferay-deployment#properties as my starting point. I want to define the most correct and complete portal-ext.properties before the first launch. I will typically only do the common portal-ext.properties file and then handle environment-specific overrides in the LCP.json env area. I especially want to set the default admin password to not be "test" so anyone who stumbles upon my Liferay environment will not be able to log in as an admin (I'll change it again in the UI later on so the password in the file is only temporary, but it is a security aspect I feel is important).
I actually spend a lot more time on my portal-ext.properties file before first launch than most others do; I personally feel that getting these values right the first time means I won't have old data or invalid configuration in my initial instance, and it gives me a better foundation for building my Liferay solution than starting with a basically empty properties file and tweaking later.
While I'm in the LCP.json file of the liferay component, I'm going to set the scale to 1 on all of the environments. If you try to launch with 2 or more Liferay nodes, each will get to the new, empty database at the same time and will try to create all of the initial Liferay tables, and if you check the logs for each node they'll report messages like "Duplicate table Xxx..." sort of failures. For the very first time the DB is created, I want to restrict the startup to just 1 node so the cluster isn't trying to create the database at the same time. After my environments are created, then I'll change the LCP.json file back to the right cluster size, but for first launch you really can't beat just setting the scale to 1.
I'm also going to check out the deployment checklist for the version of Liferay I'm using so I can apply the recommendations it has, except for the JVM parameters. The DXPC image actually already incorporates the CATALINA_OPTS for you based on the deployment checklist recommendations. You can, of course, override or replace them all if you need to, but it is a good starting set of JVM parameters (after I'm all done with the build-out and prior to go-live, I'll do some load testing, profiling and tuning of the JVM parameters, but following the deployment checklist I'll have a pretty decent starting point).
What's Missing? If you have done a standard Liferay DXP installation before, you might have noticed that there is no Activation Key / License to worry about. That's normal for DXPC; everything is provisioned from the contract agreement, so the build process will automagically inject a license for you, and you don't need to get one separately from Liferay Support.
When I get all of these changes done, I'm going to commit and push them to my repository. If I'm using forks like Liferay recommends, I'll send my PR to the main fork and get it merged into the main branch.
I'll then check out my Jenkins and verify it is able to build my whole DXPC workspace; I want to see some success here before I try to deploy the environments. If I do face issues here, I'm going to resolve them before getting to the next step...
DXPC Admin Console
When the build is done, our next step is to move over to the DXPC Admin Console. When you first log in, you'll see a view like:
When you first land on the console, every environment will appear like the 2nd item here does; they'll all say "no services" next to them because, even though the DXPC team has provisioned our environments, nothing has been populated into them.
Starting from the DEV environment, we're going to click into an environment which will show us the detail page:
From here we can click on the Builds link on the upper toolbar towards the right side:
Your list of course will be different and, if we're following the process I've been laying out, we would only have one build available to us. We'll click on the pea-pod menu on the right of the build that we want to deploy:
We'll then pick "Deploy build to..." to move to the actual deployment:
Here we need to select the environment we want to deploy to. We'll start with DEV, but eventually we'll hit them all. After selecting DEV, we can click the "Deploy Build" button to start deploying out the environment.
At this point, all of our system components are going to be created per our DXPC Workspace, the LCP.json configurations, the environment configurations, and the Docker images that Jenkins created for us. So it will create the backup, database, Liferay/Tomcat, search, and webserver (Nginx) component services. We'll see on the status page how all of the services are listed and, as startup completes, change from the gray dancing dots over to a pretty, green Ready label.
All of the status indicators are reliable except for the Liferay service. It will always show the green Ready label before it is actually done starting the portal.
If we click on the liferay service, we can actually see the log messages from Liferay:
We can also use this to get to the Linux command line (Shell), some basic metrics, see (and change) environment variables and also check the custom domains.
When the environment is up and ready, we should also review the Network page (available from the hamburger menu):
The key parts here are the Ingress endpoint and the Address list.
The ingress load balancer IP is the address that you use for forwarding your DNS... So if you own www.example.com and you're hosting it on DXPC and you're given the IP address of 34.1.2.3, you will configure your www.example.com to resolve to the 34.1.2.3 IP address. This is obviously a simple case, as your own network will likely want to direct different routes to different hosts and you'll have to work out a route-based redirect, but hopefully this gives you the info to use your ingress address correctly. As this is my dev server, I would likely want to use dev.example.com for my domain, so I'd have to handle routing that name over to 34.1.2.3.
The addresses show those ports which are open externally and the name to access the service. Primarily you're going to look at the webserver name because that will get to your open Nginx service and route traffic internally (bypassing the ingress load balancer). The non-prod links that I pointed out in the provisioning email near the beginning of this blog post point to the ingress load balancers for the non-prod environments, so those are typically the one(s) that you'd use to access the non-prod environs.
Back on topic to our First Time Setup and using the DXPC Admin console...
So at this point we have finished creating the DEV servers, we've deployed the bundle we built in Jenkins, everything has started, and we've checked the network endpoints to review the details.
The final two things we want to do are:
Check the logs (especially the Liferay logs) and verify there were no obvious failures. You might not have any obvious failures, but if you do, go to the provisioning email and at the bottom you'll find the link to open a Liferay Support ticket. In the environments I was setting up when capturing these screen prints, one of my database services in one environment failed to be created. It was nothing I had done, my DXPC Workspace was clean and build was good, there was just a problem in DXPC creating my database. I opened a support ticket on it, it got assigned to a DXPC support person and they helped resolve the problem quickly. I had to delete my database and then redeploy to get it to be created, but they helped me through these steps so it ended up not being a big deal at all.
Actually log into the environment. If using Basic Auth, use the credentials provided in the provisioning email to get to Liferay, then use the Liferay admin credentials (which hopefully you defined in your portal-ext.properties so they are not test@liferay.com/test) and verify that it looks like a functioning yet vanilla Liferay server.
When we complete these two tasks, we can say that the DEV environment is good to go, we've verified everything is working.
Why did we do all of this pretty much out of the gate? Well the DXPC team has recently finished provisioning our environments. Should something have gone wrong with the setup, it will still be fresh in their minds and it should make it easier for them to help as necessary. Plus they're going to want to know that we've been able to get started (like a waiter coming to see how you're enjoying the meal after you've taken a couple of bites) so we'll be able to answer affirmatively.
But, now that DEV is done, we want to repeat this step for all of our other allocated environments. Do each of them, one at a time, do the deploy and the startup and the network and the testing, make sure the environment is good to go, then move onto the next one.
And yes, even do this to PROD. Sure it will only be a vanilla Liferay DXP, but our goal at this point is not really to test the customizations that we're going to be working on and eventually deploying to PROD, our goal here is to test all of the processes, to verify that we can deploy to every environment, including PROD, and that all of the pieces start cleanly and serve up even the vanilla traffic.
After we've finished PROD, I'm going to go back to my DXPC Workspace, into the liferay/LCP.json file and restore the scale for my clustered environment, going to push that to the repo, verify Jenkins did the build, then I'm going to deploy the new cluster image to my multi-server environments. The database in each will have been properly created by the single node we were using before, so we don't have to worry about all of the cluster nodes trying to create the database at the same time. Once the cluster is up, we can verify the cluster is working properly by checking the logs for jgroups messages showing the cluster is well-formed.
Local Testing
You can test out your entire DXPC environment locally if you have Docker installed. Just download the docker-compose configuration from https://github.com/LiferayCloud/stack-upgrade/blob/develop/docker-compose.yml and put it in the root of your Liferay DXPC Workspace folder. Then you can use commands like docker-compose up and docker-compose down to launch everything. It will basically use your images and bring everything up with the local configurations (so liferay/configs/local, for example).
Conclusion
So this has been a long blog, certainly, but I started out wanting to show how to get started with Liferay DXP Cloud, and I feel like I've done just that - only covered "getting started". There really is a heck of a lot more for you to pick up and learn in your new environment, such as how to complete backups and restores, how to apply fixpacks/hotfixes or, better yet, how to update your Liferay (and other) images to later versions, Disaster Recovery (if you've opted for that), and more.
I feel like I've barely scratched the surface!
Once you get through the initial verification and setup and get into a good rhythm, you'll eventually see that development/deployment in DXPC is basically a repeat of the following steps:
Write code, push to repo.
Log into DXPC Console, deploy build to environment(s).
Really, that's it in a nutshell. When you can embrace it fully, it is just so elegant to get from a commit to fully deployed in basically a few mouse clicks...
Anyway, I hope you find this blog useful. I'll probably have some updates in a couple of days when my friends on the DXPC team read what I've written and start pointing out all of the things I got wrong. If you want some more info on DXPC, or you would like to be hooked up with someone who can give you a demo or even a sales pitch, leave a comment below or, even better, hit me up on the Community Slack channels. I'll be able to either answer what you want to know or put you in touch with someone who can.
Before I go, a note to clients who are currently using Liferay DXP on-premises and would prefer to move to this DXP Cloud goodness: not only can I help connect you with someone to get you the DXPC details, but I can also share info on a Migration package that Liferay Global Services can provide to help you migrate your on-prem Liferay DXP environment straight to DXP Cloud, so we can get you there and you'd barely need to lift a finger. Well, you'll probably have to do more than lift a finger, but you wouldn't have to do the migration on your own.
Posted almost 4 years ago by Olaf Kock
If you’ve ever looked at a Liferay workflow implementation and its scripts, you might have seen workflowContext being referenced in the scripts that are executed in the individual tasks and states.
I've recently had my first scripting contact with Workflow, and wanted to look at this context and what it can do for me. Digging a bit, you'll find out that workflowContext is a Map<String, Serializable> - interesting: Serializable hints at it being available again in later steps of the workflow, when filled in the beginning. And indeed, that's the case.
Here’s a very simple example for how it can be useful:
First of all: The dynamic portion of a workflow is a mixture of Freemarker (for notifications) and Groovy (for scripts).
The default Single Approver Workflow sends mail without any subject. That’s ugly, but can be changed easily: Notifications have a Description and a Template. Their description turns into the email subject - and by default it’s empty. So you’ll just need to fill it: Static text is fine (“Please review a workflow submission”), but you can do better with workflowContext:
Before making the email more personal, we’ll have to go into scripting: Look at Single Approver’s initial state, “created”: It doesn’t have any action, but you can add one. Let’s make it extremely simple and just cater for JournalArticle (that's a Web Content Article on the UI - other types are left as an API exercise for the reader): onExit, enter this script:
import com.liferay.asset.kernel.model.AssetRenderer;
import com.liferay.portal.kernel.util.GetterUtil;
import com.liferay.portal.kernel.workflow.WorkflowConstants;
import com.liferay.portal.kernel.workflow.WorkflowHandler;
import com.liferay.portal.kernel.workflow.WorkflowHandlerRegistryUtil;
import com.liferay.portal.kernel.workflow.WorkflowStatusManagerUtil;
long classPK = GetterUtil.getLong((String)workflowContext.get(WorkflowConstants.CONTEXT_ENTRY_CLASS_PK));
String className = (String)workflowContext.get(WorkflowConstants.CONTEXT_ENTRY_CLASS_NAME);
WorkflowHandler workflowHandler = WorkflowHandlerRegistryUtil.getWorkflowHandler(className);
AssetRenderer assetRenderer = workflowHandler.getAssetRenderer(classPK);
String assetTitle = "none";
try {
assetTitle = assetRenderer.getAssetObject().getTitle();
} catch ( java.lang.Exception e) {
// ignore. Note: Above code works for JournalArticle, but
// not every asset has a getTitle method. Those will fail,
// but we ignore this in the quick sample here.
}
workflowContext.put("assetTitle", assetTitle);
WorkflowStatusManagerUtil.updateStatus(WorkflowConstants.getLabelStatus("pending"), workflowContext);
The magic words are the last two lines: From now on, any time in the workflow, we can reference workflowContext.get(“assetTitle”) in scripts, or ${assetTitle} in Freemarker-enabled fields.
Go ahead and change the description of this workflow’s “Review Notification” to “please review ${assetTitle}” and provide your reviewers with more meaningful notifications.
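To illustrate the script side, here's a minimal sketch of how a later task or state action could read the stored value back (workflow action scripts are Groovy, which accepts plain Java syntax; the logger name is just an illustration):
import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;

// workflowContext is the same Map the "created" script wrote to, so the
// value stored there is visible in every later step of the workflow.
String assetTitle = (String)workflowContext.get("assetTitle");

Log log = LogFactoryUtil.getLog("single-approver-review");

log.info("Review requested for: " + assetTitle);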
Extend this with anything you want to store in the workflowContext. Well, not anything - don't overdo it. But it can simplify your other workflow scripts tremendously, and provide personalized and meaningful notifications to your customers.
Need some icing on the cake?
Description is a single line and turns into the subject of an email, while Template can be multi-line and will be the body of the email (e.g. you can insert HTML markup). However, if you're creating a User Notification, Template will be a single line, with HTML tags escaped, and the only content shown (no Description) - it will be the title of the UI notification. You might want to split up the current single notification into two, to cater for each of the channels individually.
Posted almost 4 years ago by Prasanna Katti
After doing multiple rounds of searching the internet for how to use an IN parameter in Custom SQL in Liferay, without any satisfactory results, I assumed this probably could never be achieved in Liferay. However, a few days back when I was looking at the code, I thought of a tweak that could be applied if we have a use case that pertains to the mentioned scenario. I applied it and lo! It worked. Hence I'm sharing with my fellow developers a brief description of the steps I followed to implement the IN operator/clause in my custom SQL query in Liferay.
Step 1:
Create Service Builder
Here I have created book management as a service module with bookId, bookname and author as fields. Other fields are left as default.
Service.xml
Build the service. After building the service, the entities are generated.
Step 2
Create default.xml
While working with Custom SQL, default.xml is an important file, as it contains the SQL query that is to be executed to fetch the result set from the server. It is created under the META-INF/custom-sql folder in the service module. In my case I have named the service builder project book-management, therefore the path for default.xml will be book-management-service/src/main/resources/META-INF/custom-sql/default.xml, as illustrated below:
Paste the contents inside default.xml as below
Run build service and do a Gradle refresh for the entities to be generated.
Step 3
Create EntityFinderImpl
The next step is to create an EntityFinderImpl class. In my case I have named the entity book, hence the entity finder class would be BookFinderImpl.java. This class should be created in the com.sample.book.service.persistence.impl package.
Here the BookFinderImpl class should initially extend the BookPersistenceImpl class, as illustrated below.
Next, run service builder and do a Gradle refresh. After building the service, we can observe that additional classes have been created inside the com.sample.book.service.persistence.impl package.
Now change BookFinderImpl to extend BookFinderBaseImpl and implement the BookFinder interface. Next, add the component annotation @Component(service = BookFinder.class) so that BookFinderImpl becomes available as a service.
Build service.
Step 4
Create method in EntityFinderImpl
Now create a method inside the BookFinderImpl class, as described below.
Here, as we can see, I have first injected a reference to the CustomSQL service. For the CustomSQL reference to be available in the service module, add the following to the build.gradle file:
compileOnly group: "com.liferay", name: "com.liferay.portal.dao.orm.custom.sql.api", version: "1.0.0"
Next I have created a method that returns a List of objects. In it I have opened a session object to handle the session for the query, and created a String object that stores the query, fetched by its SQL id from default.xml.
Further, I have created a StringBuffer object that appends the IN parameter to the SQL query. The values for the parameter come from DTO bean entities and are accessed using getter methods. If there are any positional parameters in our SQL query, they can be set by creating a QueryPos object and passing the appropriate values. A sketch of what such a finder method can look like follows below.
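Since the original screenshot isn't reproduced here, the following is only a hedged sketch of what such a finder method can look like, following the steps just described: fetch the SQL by its id, append the IN clause with a StringBuffer, map the result rows to the entity, and set any positional parameters through QueryPos. The method and constant names, the author parameter, and the example SQL in the comment are illustrative, and the table name depends on the namespace in your service.xml:
package com.sample.book.service.persistence.impl;

import com.liferay.portal.dao.orm.custom.sql.CustomSQL;
import com.liferay.portal.kernel.dao.orm.QueryPos;
import com.liferay.portal.kernel.dao.orm.QueryUtil;
import com.liferay.portal.kernel.dao.orm.SQLQuery;
import com.liferay.portal.kernel.dao.orm.Session;
import com.liferay.portal.kernel.util.StringUtil;

import com.sample.book.model.Book;
import com.sample.book.model.impl.BookImpl;
import com.sample.book.service.persistence.BookFinder;

import java.util.List;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(service = BookFinder.class)
public class BookFinderImpl extends BookFinderBaseImpl implements BookFinder {

	// Matches the id of a <sql> entry in META-INF/custom-sql/default.xml,
	// whose body could be something like:
	// SELECT * FROM Foo_Book WHERE author = ?
	public static final String FIND_BY_AUTHOR =
		BookFinder.class.getName() + ".findByAuthor";

	public List<Book> findByAuthorAndBookIds(String author, long[] bookIds) {
		Session session = null;

		try {
			session = openSession();

			// Fetch the base query from default.xml by its id.
			String sql = _customSQL.get(getClass(), FIND_BY_AUTHOR);

			// Append the IN clause; the ids would typically come from the
			// getters of your DTO beans.
			StringBuffer sb = new StringBuffer(sql);

			sb.append(" AND bookId IN (");
			sb.append(StringUtil.merge(bookIds));
			sb.append(")");

			SQLQuery sqlQuery = session.createSQLQuery(sb.toString());

			// Map the result rows back to the service builder entity.
			sqlQuery.addEntity("Book", BookImpl.class);

			// Positional ("?") parameters are still set through QueryPos.
			QueryPos queryPos = QueryPos.getInstance(sqlQuery);

			queryPos.add(author);

			return (List<Book>)QueryUtil.list(
				sqlQuery, getDialect(), QueryUtil.ALL_POS, QueryUtil.ALL_POS);
		}
		catch (Exception exception) {
			throw new RuntimeException(exception);
		}
		finally {
			closeSession(session);
		}
	}

	@Reference
	private CustomSQL _customSQL;

}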
Run build service and click on gradle refresh
Step 5
Create a method in EntityLocalServiceImpl class
Next, create a method in the BookLocalServiceImpl class that calls the method created in the FinderImpl, as sketched below. Pass the required parameters through to the finder method so that the result is available when calling the service module.
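As a rough sketch (assuming a recent Service Builder version where the service implementation is an OSGi component so @Reference injection works; older, Spring-wired versions typically already expose a finder field in the generated base class instead - the method name here is illustrative):
import com.sample.book.model.Book;
import com.sample.book.service.base.BookLocalServiceBaseImpl;
import com.sample.book.service.persistence.BookFinder;

import java.util.List;

import org.osgi.service.component.annotations.Reference;

public class BookLocalServiceImpl extends BookLocalServiceBaseImpl {

	public List<Book> getBooksByAuthorAndIds(String author, long[] bookIds) {

		// Delegate to the custom finder; after the next buildService run,
		// callers reach this through BookLocalServiceUtil or an injected
		// BookLocalService.
		return _bookFinder.findByAuthorAndBookIds(author, bookIds);
	}

	@Reference
	private BookFinder _bookFinder;

}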
This is the final step in our Custom SQL configuration. Now the method created in the LocalServiceImpl can be accessed from any controller class that consumes the service APIs to generate a result set using the IN parameter in the SQL query.
Posted almost 4 years ago by Jamie Sammons
The Liferay Portal 7.3 CE GA8 release is primarily focused on fixes, so no notable new features are introduced with this release.
Download options
Choose the best download option suited for your environment below.
Docker image
To use Liferay Portal 7.3 CE GA8 in Docker, run:
docker run -it -p 8080:8080 liferay/portal:7.3.7-ga8
For more information on using the official Liferay docker image see the liferay/portal repo on Docker Hub.
Bundles and other download options
To download the Liferay Portal 7.3 CE GA8 release and any additional files (for example, the source code, or dependency libraries), visit the release page.
Dependency Management
If you are developing on top of Liferay Platform using Liferay Workspace, you will only need to define a single dependency artifact by adding the following line to each module's build.gradle file:
dependencies {
compileOnly group: "com.liferay.portal", name: "release.portal.api"
}
By setting a product info key property, it is possible to update all dependencies to a new version by updating the liferay.workspace.product property in the Liferay Workspace project's gradle.properties file:
liferay.workspace.product = portal-7.3-ga8
When using an IDE such as Eclipse or IntelliJ, all APIs are immediately available in autocomplete.
Documentation
All documentation for Liferay Portal and Liferay Commerce can now be found on our new documentation site called learn.liferay.com. For more information on our new documentation initiative see the official announcement here.
Compatibility Matrix
Liferay's general policy is to test Liferay Portal and Liferay Commerce against newer major releases of operating systems, open source app servers, browsers, and open source databases (we regularly update the bundled upstream libraries to fix bugs or take advantage of new features in the open source we depend on).
Liferay Portal 7.3 CE and Liferay Commerce 3.0 were tested extensively for use with the following Application/Database Servers:
Application Server
Tomcat 9.0
Wildfly 16.0 (Previously 11.0)
Database
HSQLDB 2 (only for demonstration, development, and testing)
MySQL 5.7, 8.0
MariaDB 10.2
PostgreSQL 11.2 (Previously 10)
JDK
IBM J9 JDK 8
Oracle JDK 8
Oracle JDK 11
All Java Technical Compatibility Kit (TCK) compliant builds of Java 11 and Java 8
Source Code
Source is available as a zip archive on the release page, or on its home on GitHub. If you're interested in contributing, take a look at our contribution page.
Bug Reporting
If you believe you have encountered a bug in the new release you can report your issue by following the bug reporting instructions.
Getting Support
Support is provided by our awesome community. Please visit the helping a developer page for more details on how you can receive support.
Fixes and Known Issues
Fixes
List of known issues
Posted about 4 years ago by Marcial Calvo Valenzuela
This blog entry is also available in Spanish.
One of the most common needs that we face in Liferay Portal / DXP installations is monitoring, along with an alert system that warns us if something does not work as it should (or as expected). Whether it is a web service managed in the cloud, a virtual machine, or a Kubernetes cluster, we need to monitor the entire infrastructure in real time and even keep a history. Grafana was developed to help meet this need.
Grafana is free software under the Apache 2.0 license that allows the display and formatting of metrics. With it we can create dashboards and charts from multiple sources. One such source type is Prometheus.
Prometheus is open source software that enables metric collection via HTTP pull. To extract these metrics it is necessary to have exporters at the desired collection points. For example, for Kubernetes cluster metrics we could use kube-state-metrics. In our case, we want to monitor Liferay Portal / DXP, which in an on-premise installation we would do through the JMX protocol, using tools such as JConsole or VisualVM to inspect it live, or using an APM that extracts and persists this information in order to have a history of the behavior of the platform. In this case we will use the JMX Exporter, through which we will extract the mBeans from the Liferay Portal / DXP JVM so that Prometheus can read and store them and we can then build our dashboards and alerts on top of them in Grafana.
We will also use the Kubernetes container Advisor (cAdvisor) to obtain the metrics that the cluster exposes about our containers.
Requirements
A Kubernetes cluster
Liferay Portal / DXP deployed on Kubernetes. If you don't have it, you can follow the blog I wrote a few months ago about deploying Liferay Portal / DXP on Kubernetes.
Let's start!
Configuring the JMX Exporter for Tomcat
The first thing we will do is extract the metrics via JMX from the Tomcat of our Liferay Portal / DXP. For this we will use the JMX Exporter, which we will execute as a javaagent within the JVM opts of our Liferay Portal / DXP.
Using our Liferay Workspace, we add a jmx folder and in it we include the javaagent .jar and the yaml file that configures the exporter. In this yaml file we include some patterns to extract the mBeans as metrics: counters to monitor Tomcat requests, plus session, servlet, thread pool, database pool (Hikari) and Ehcache statistics. These patterns are fully adaptable and extensible for any needs that may arise.
Next we will need to build our Docker image and push it to our registry to update our Liferay Portal / DXP deployment with our latest image, or else configure our Liferay Portal / DXP image with a configMap and volume to add this configuration.
Using the environment variable LIFERAY_JVM_OPTS available in the Liferay Portal / DXP Docker image, we will include the necessary flag to execute the javaagent. Following the documentation of the JMX Exporter, we will need to define the port on which we will expose the metrics and the configuration file with our mBean export patterns. In my case, I have included this environment variable using a configMap where I group all of these environment variables, but it could be inserted directly into the Liferay Portal / DXP deployment.
We will open port 8081 on the container, calling it "monitoring", and we will include the necessary annotation so that Prometheus scrapes the metrics from the pods that run this type of container, as well as indicate the port from which it will extract the metrics so that Prometheus hooks only onto this port.
With this configuration, we will be able to start our first Liferay Portal / DXP pod, and on port 8081 we will have a "/metrics" endpoint where the configured metrics are being exposed. If we wanted to query it, we could open a nodePort to port 8081 to access it, but it is not necessary.
Installing Prometheus
Now we will install Prometheus in our cluster:
First of all we will need to create the namespace where we will deploy Prometheus. We will call it "monitoring".
Next, we will include the cluster resources to monitor. We will need to apply the following yaml manifest, which creates the role "prometheus" and associates it with the namespace "monitoring". This applies the "get", "list" and "watch" permissions to the cluster resources listed in "resources". In our case, we will add the pods and the k8s nodes: the pods to access the "/metrics" endpoint where our JMX Exporter is exposing metrics, and the nodes to be able to access the CPU and memory requested by our Liferay Portal / DXP containers:
We will need to create the following configMap to configure our Prometheus. In it we indicate the configuration of the job that will scrape the metrics that the JMX Exporter is generating in each Liferay Portal / DXP pod, in addition to the job that will extract the metrics from the Kubernetes container Advisor, in order to monitor the CPU and memory of the containers. In it, we also indicate that the interval to read these metrics is 5s:
Now we will deploy Prometheus in our namespace monitoring. First we will create the PVC where Prometheus will store its information and then we will apply the yaml manifest with the Prometheus deployment. In it we include the previous configuration made in step 2 and open port 9090 to the container.
We will then create the Service that will expose our Prometheus instance to access it. In this case we will use a nodePort to access the destination port:
And once deployed ... if we access the IP of our cluster and nodePort 30000 we will see Prometheus:
Accessing targets (http://<cluster-ip>:30000/targets) we will see the status of our pods configured to export metrics. In our case, a single pod with Liferay Portal / DXP:
From the main Prometheus page, we can insert the queries to extract the desired metrics. These queries must be in PromQL format. To learn a little more about this Query Language, you can read the basics from the official Prometheus website and pick up some examples:
If we look at the output of the query, we can see, in addition to its output value, the origin of the metric: the pod name, namespace, IP, release name (in case we use Helm for its deployment), etc. We can also view the output of the metric in the form of a graph:
Installing and configuring Grafana
Grafana is the metric analysis platform that allows us to query and create alerts on the data obtained from a metric source, such as our Prometheus. Grafana's strong point is the creation of very powerful dashboards that are totally tailored and persistent over time, and which can also be shared. The creation of alerts through many channels such as email, Slack, SMS, etc. is also a very powerful point.
As we have done with Prometheus, we will need to install Grafana on our k8s cluster. We will carry out the Grafana deployment in the same "monitoring" namespace.
We will create the following configMap in the "monitoring" namespace that will help us apply the necessary configuration to Grafana to connect with Prometheus. Basically what we do in it is create a datasource whose endpoint is the Prometheus service created previously: prometheus-service.monitoring.svc:8080
We will create the PVC where Grafana will store its information with the following yaml manifest.
We will create the Grafana deployment in "monitoring" applying the following manifest. In it we open port 3000 to the container.
We will create the service that exposes Grafana, opening port 3000 through nodePort 32000 of our cluster:
Now we can access our Grafana instance:
We will log in with the default administrator user (admin, password admin) and can start creating our dashboards:
Grafana is very powerful and highly customizable, so the graphs that you want to create will require some working time to tailor to the taste of the user who wants to exploit them. In the case of a Liferay Portal / DXP installation on k8s, I am monitoring container memory, container CPU usage, JVM memory usage and Garbage Collector time, database pools and the HTTP connection pool, plus a dashboard to monitor the percentage of cache misses on the Liferay Portal / DXP caches:
It is possible to configure alerts for when a dashboard reaches a desired threshold. For example, for when the cache miss percentage is higher than 90% for 5m, we will configure the following alert:
We will be able to review the history of the alerts for the dashboard and obtain exact information about the moment the threshold was reached, to analyze the problem.
Conclusion
Monitoring our Liferay Portal / DXP installation on Kubernetes, in a totally customized way, is possible thanks to Prometheus and Grafana. In addition, we will not only have live monitoring of the platform, we will also have a history that will help us analyze its behavior in problematic situations.
Posted about 4 years ago by Crystal Santos
When we think about having different authentication types, we commonly think about one fluid and intuitive screen to guide the user through the process. And why is that? It's probably because logging in is a means to an end: a user is often just trying to get past this step to achieve a broader goal. With this in mind, it is very important to make logging in smooth and easy.
A login process with too many steps can increase the risk of abandonment - and you probably don't want that on your website. For example, one User Interface Engineering (UIE) study of an online retailer shows that 75% of e-commerce shoppers never tried to complete their purchase once they had requested their password. A login wall can also be another barrier to users navigating through websites.
Knowing that, it's necessary to design a login screen that makes the user experience better for everyone and ensures that no user group is excluded at this step. However, the Liferay login module normally adds a few steps to get that done. As this is a very common and important question, I've decided to write this post to help you get this done! As a reminder, this is one way to do this kind of thing. If you have another way, feel free to comment here and I'll appreciate your suggestions :)
OpenID Connect
As you can have more than one OpenID Provider configured in Liferay, this will be the workflow to log in: you need to click on "OpenID Connect", choose the provider, and then you'll be redirected to the login page. A lot of steps to log in, right?
What if you want this to be a "Login" button that takes you to the login page when clicked? A pretty easy way to achieve that is using Web Content + Structure + Template. See the details below:
1. The login module should be placed on some page. As we normally use the Liferay built-in login to log in as administrator, we can have an /admin page, for example.
2. Create the structure below:
{ "availableLanguageIds": [ "en_US" ], "defaultLanguageId": "en_US", "fields": [ { "label": { "en_US": "OpenID Provider" }, "predefinedValue": { "en_US": "" }, "style": { "en_US": "" }, "tip": { "en_US": "" }, "dataType": "string", "indexType": "keyword", "localizable": true, "name": "OpenIDProvider", "readOnly": false, "repeatable": false, "required": false, "showLabel": true, "type": "text" }, { "label": { "en_US": "Login Text" }, "predefinedValue": { "en_US": "" }, "style": { "en_US": "" }, "tip": { "en_US": "" }, "dataType": "string", "indexType": "keyword", "localizable": true, "name": "LoginText", "readOnly": false, "repeatable": false, "required": false, "showLabel": true, "type": "text" } ]}It’ll be a simple structure with two fields: OpenId Provider and Login Text:3. Create the template to the structure created in the previous step as below: Of course you can change the layout here, add images or whatever you want. You just need to leave the form code and it’ll work.4. Create the web content indicating the OpenID Connect Provider Name in the first field and the name of the button in the second one, as indicated below:5. Add this web content in a page, and that’s it! This will redirect the user to the provider login page and it’ll follow the Liferay login workflow without problems.Of course you can add the structure as a repeatable field to include all OpenID Providers you have.Social Media connectionsWhat if you want to change the layout on Facebook login in login built-in in Liferay? How can you achieve that? Or what if you want to make it simpler?Having the Facebook URL to login is a little bit complicated in comparison with OpenID. However, it’s not impossible! You can inject the URL in FreeMarker Templates as additional context variables. 
It's necessary to create a Java class that implements the TemplateContextContributor service, as you can see below:

import java.util.Map;

import javax.portlet.PortletRequest;
import javax.portlet.PortletURL;
import javax.portlet.WindowStateException;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

import com.liferay.portal.kernel.facebook.FacebookConnect;
import com.liferay.portal.kernel.json.JSONObject;
import com.liferay.portal.kernel.json.JSONUtil;
import com.liferay.portal.kernel.portlet.LiferayWindowState;
import com.liferay.portal.kernel.portlet.PortletURLFactoryUtil;
import com.liferay.portal.kernel.servlet.PortalSessionThreadLocal;
import com.liferay.portal.kernel.template.TemplateContextContributor;
import com.liferay.portal.kernel.theme.ThemeDisplay;
import com.liferay.portal.kernel.util.GetterUtil;
import com.liferay.portal.kernel.util.HttpUtil;
import com.liferay.portal.kernel.util.PropsKeys;
import com.liferay.portal.kernel.util.PropsUtil;
import com.liferay.portal.kernel.util.PwdGenerator;
import com.liferay.portal.kernel.util.Validator;
import com.liferay.portal.kernel.util.WebKeys;

/**
 * @author crystalsantos
 */
@Component(
	immediate = true,
	property = {"type=" + TemplateContextContributor.TYPE_GLOBAL},
	service = TemplateContextContributor.class
)
public class LoginTemplateContextContributor
	implements TemplateContextContributor {

	@Reference
	private FacebookConnect facebookConnect;

	@Override
	@SuppressWarnings("deprecation")
	public void prepare(Map contextObjects, HttpServletRequest request) {
		try {
			ThemeDisplay themeDisplay =
				(ThemeDisplay)request.getAttribute(WebKeys.THEME_DISPLAY);

			PortletURL renderUrl = PortletURLFactoryUtil.create(
				request, "com_liferay_login_web_portlet_LoginPortlet",
				PortletRequest.RENDER_PHASE);

			renderUrl.setWindowState(LiferayWindowState.NORMAL);
			renderUrl.setParameter(
				"mvcRenderCommandName", "/login/login_redirect");

			String facebookAuthRedirectURL = facebookConnect.getRedirectURL(
				themeDisplay.getCompanyId());
			String facebookAuthURL = facebookConnect.getAuthURL(
				themeDisplay.getCompanyId());
			String facebookAppId = facebookConnect.getAppId(
				themeDisplay.getCompanyId());

			// Reuse the nonce stored in the portal session, or generate one

			HttpSession portalSession =
				PortalSessionThreadLocal.getHttpSession();

			String nonce = null;

			if (Validator.isNotNull(portalSession)) {
				nonce = (String)portalSession.getAttribute(
					WebKeys.FACEBOOK_NONCE);

				if (Validator.isNull(nonce)) {
					nonce = PwdGenerator.getPassword(
						GetterUtil.getInteger(
							PropsUtil.get(PropsKeys.AUTH_TOKEN_LENGTH)));

					portalSession.setAttribute(WebKeys.FACEBOOK_NONCE, nonce);
				}
			}

			// Build the Facebook OAuth URL with the parameters the
			// Facebook Connect flow expects

			facebookAuthURL = HttpUtil.addParameter(
				facebookAuthURL, "client_id", facebookAppId);
			facebookAuthURL = HttpUtil.addParameter(
				facebookAuthURL, "redirect_uri", facebookAuthRedirectURL);
			facebookAuthURL = HttpUtil.addParameter(
				facebookAuthURL, "scope", "email");
			facebookAuthURL = HttpUtil.addParameter(
				facebookAuthURL, "stateNonce", nonce);

			JSONObject stateJSONObject = JSONUtil.put(
				"redirect", themeDisplay.getURLHome()
			).put(
				"stateNonce", nonce
			);

			facebookAuthURL = HttpUtil.addParameter(
				facebookAuthURL, "state", stateJSONObject.toString());

			// Expose the URL to FreeMarker templates as ${facebook_url}

			contextObjects.put("facebook_url", facebookAuthURL);
		}
		catch (WindowStateException e) {
			e.printStackTrace();
		}
	}

}

In the code above, the contextObjects.put("facebook_url", facebookAuthURL) call adds a variable called facebook_url that will be available in any FreeMarker template.
This variable will contain the Facebook login URL with all of the parameters necessary to work with Liferay, so the template only has to render a link or button that points to facebook_url.

All together
By improving our structure and template, it's possible to achieve a friendlier layout, adding your website's styles or the familiar social media look, for example. Updated structure:

{
	"availableLanguageIds": ["en_US"],
	"defaultLanguageId": "en_US",
	"fields": [
		{
			"label": {"en_US": "OpenID Provider"},
			"predefinedValue": {"en_US": ""},
			"style": {"en_US": ""},
			"tip": {"en_US": ""},
			"dataType": "string",
			"indexType": "keyword",
			"localizable": true,
			"name": "OpenIDProvider",
			"readOnly": false,
			"repeatable": false,
			"required": true,
			"showLabel": true,
			"type": "text",
			"nestedFields": [
				{
					"label": {"en_US": "OpenID Text"},
					"predefinedValue": {"en_US": ""},
					"style": {"en_US": ""},
					"tip": {"en_US": ""},
					"dataType": "string",
					"indexType": "keyword",
					"localizable": true,
					"name": "OpenIDText",
					"readOnly": false,
					"repeatable": false,
					"required": true,
					"showLabel": true,
					"type": "text"
				},
				{
					"label": {"en_US": "OpenIDIcon"},
					"predefinedValue": {"en_US": ""},
					"style": {"en_US": ""},
					"tip": {"en_US": ""},
					"dataType": "image",
					"fieldNamespace": "ddm",
					"indexType": "text",
					"localizable": true,
					"name": "OpenIDIcon",
					"readOnly": false,
					"repeatable": false,
					"required": true,
					"showLabel": true,
					"type": "ddm-image"
				}
			]
		},
		{
			"label": {"en_US": "Facebook Text"},
			"predefinedValue": {"en_US": ""},
			"style": {"en_US": ""},
			"tip": {"en_US": ""},
			"dataType": "string",
			"indexType": "keyword",
			"localizable": true,
			"name": "FacebookText",
			"readOnly": false,
			"repeatable": false,
			"required": true,
			"showLabel": true,
			"type": "text",
			"nestedFields": [
				{
					"label": {"en_US": "FacebookIcon"},
					"predefinedValue": {"en_US": ""},
					"style": {"en_US": ""},
					"tip": {"en_US": ""},
					"dataType": "image",
					"fieldNamespace": "ddm",
					"indexType": "text",
					"localizable": true,
					"name": "FacebookIcon",
					"readOnly": false,
					"repeatable": false,
					"required": true,
					"showLabel": true,
					"type": "ddm-image"
				}
			]
		}
	]
}

With the updated template applied to this structure, the final result is a single, cohesive login area. As you can see, this way it's possible to configure different types of login in one piece of web content, making it easy to change, add, or remove login types. Moreover, you'll hand more power to your content team, which will depend less and less on the IT team.

Although you can use the Liferay login without changes, I hope these tips help your team be more flexible and independent. Always remember how important an easy login is to your users' satisfaction.

I hope you can use the power of web content to improve your environment and design your login in a fluid, easy way for your users. And, of course, if you have more tips or questions, please leave them in the comments below!
|
Posted
about 4 years
ago
by
David H Nebinger
Introduction
As many folks know, I'm known for telling people not to look inside the Liferay database itself. It's not always clear what is what in the DB, and Liferay has a lot of code around retrieving and updating data that you likely will not get completely right if you were to update the DB directly. I typically end with "Always, always use the Liferay API for your data needs..."

And, if you're writing code, it is easy to follow this recommendation. But what if you're a system admin, or you're trying to diagnose a problem in an environment that you can't just deploy new code to; how can you follow this recommendation? Via the Scripting control panel that is part of System Administration, that's how...

What is Groovy?
If you have no background with Groovy, let me just introduce it a bit. Groovy is a scripting language, but it is closely tied to Java. In fact, most Java classes could be copied into a Groovy script and be pretty much ready to run; the syntax is that connected. But Groovy is also more than just Java: it has some weak-typing support similar to what JavaScript offers, plus additional language syntax unique to Groovy and not part of Java.

Groovy is a world practically unto itself. I'm nowhere near a Groovy expert; I just use it to support my day job, so my Groovy knowledge is quite narrow. Whenever I find myself trying to figure out a certain piece of Groovy syntax, I typically end up on the Groovy homepage at https://groovy-lang.org/

The important part for me is that a) it is very close to Java in many ways, so it is easy for me to switch between them, b) it is interpreted, so I don't have to write and deploy code on the server, and c) I can access pretty much the entire Liferay API to do things quickly...

Examples
One thing I often find myself needing to do is find some subset of entities and output what I find, or do something with each entity (change it, evaluate it, or ...). While the complete scripts themselves are probably not very reusable, the techniques I use do tend to be reusable in one form or another. So I want to provide some examples that highlight some of the things I've done so that you might be able to leverage them in your own scripts. Note that what I'm presenting here is not meant to be the only way to do something. There may be better examples or better ways to accomplish the same things I'm doing here; hopefully, if folks have a better way, they'll add it in the comments below so we can all benefit...

Data Retrieval
If anyone asks about any of the static util classes that Liferay has, I'm the first one to step up and say that no one should be using the static utils anymore; we should all be @Reference-injecting our services and leaving those static util classes in the dust. But then I clarify: they can be used in cases where we don't have a component and therefore can't @Reference something in. I often use two examples to support this case. The first is the ServiceBuilder entity model classes; they're just instantiated, so they don't have their own component context. The second example is this one, Groovy scripts. It is still just so easy to write a line of Groovy like:

User user = UserLocalServiceUtil.getUserByEmailAddress(
	PortalUtil.getDefaultCompanyId(), "[email protected]");

So yeah, for your Groovy scripts, feel free to dust off those old legacy static util classes and put them to work. This is one case where it is absolutely appropriate to use them to get to the data that you need.

Searching for Entities
One thing I often find myself doing is having to find and iterate over a bunch of entities, often needing specific queries to match on, not simple finder-based retrievals. Since I'm also a big fan of Dynamic Queries (https://liferay.dev/en/b/visiting-dynamicquery and https://liferay.dev/en/b/revisiting-dynamicquery), I'll often use them in my Groovy scripts to get the values I need, such as finding all Users with an example.com email address:

DynamicQuery dq = UserLocalServiceUtil.dynamicQuery();
dq.add(RestrictionsFactoryUtil.like("emailAddress", "%@example.com"));
List users = UserLocalServiceUtil.dynamicQuery(dq);
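
To eyeball what matched right in the Scripting console, here's a minimal follow-up sketch; it just iterates the users list from the query above and prints via the out variable the console provides (covered in the Logging section below):

// "users" is the List returned by the dynamic query above.
out.println("Matched " + users.size() + " users");

for (User user : users) {
	out.println(user.getEmailAddress() + " -> " + user.getFullName());
}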
Batch Updates
Some problems require bulk entity modifications in order to solve them. I'm not going to go to the database to change entities there; I know that I have to use the APIs to ensure that my index gets updated, etc. So, in order to do batch updates, I fall back on another old favorite of mine, the Actionable Dynamic Query (https://liferay.dev/en/b/visiting-dynamicquery#actionableDynamicQuery). Using the ADQ, you can batch-update your entities while still sticking with the Liferay API to do it correctly.

Let's say we needed to change all email addresses from @example.com domains over to @testing.com domains. We can accomplish this with an easy ADQ implementation:

ActionableDynamicQuery actionableDynamicQuery =
UserLocalServiceUtil.getActionableDynamicQuery();
actionableDynamicQuery.setAddCriteriaMethod(
new ActionableDynamicQuery.AddCriteriaMethod() {
@Override
public void addCriteria(DynamicQuery dynamicQuery) {
dynamicQuery.add(
RestrictionsFactoryUtil.like("emailAddress", "%@example.com"));
}
});
actionableDynamicQuery.setPerformActionMethod(
new ActionableDynamicQuery.PerformActionMethod() {
@Override
public void performAction(User user) {
String emailAddress = user.getEmailAddress();
int pos = emailAddress.lastIndexOf('@');
String prefix = emailAddress.substring(0, pos);
user.setEmailAddress(prefix + "@testing.com");
UserLocalServiceUtil.updateUser(user);
}
});
actionableDynamicQuery.performActions();
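
The perform action method doesn't have to stop at updating the entity in hand, either. Here's a minimal sketch, assuming the generated BlogsEntryLocalServiceUtil and its standard dynamicQueryCount() method (imports omitted as in the samples above), that builds a map of each matching user's email address to the number of blog entries they've authored:

// java.util is auto-imported by Groovy, so HashMap needs no import here.
Map blogCounts = new HashMap();

ActionableDynamicQuery adq = UserLocalServiceUtil.getActionableDynamicQuery();

adq.setAddCriteriaMethod(
	new ActionableDynamicQuery.AddCriteriaMethod() {

		@Override
		public void addCriteria(DynamicQuery dynamicQuery) {
			dynamicQuery.add(
				RestrictionsFactoryUtil.like("emailAddress", "%@example.com"));
		}

	});
adq.setPerformActionMethod(
	new ActionableDynamicQuery.PerformActionMethod() {

		@Override
		public void performAction(User user) {

			// Count this user's blog entries with a separate dynamic query.

			DynamicQuery blogsQuery = BlogsEntryLocalServiceUtil.dynamicQuery();

			blogsQuery.add(
				RestrictionsFactoryUtil.eq("userId", user.getUserId()));

			blogCounts.put(
				user.getEmailAddress(),
				BlogsEntryLocalServiceUtil.dynamicQueryCount(blogsQuery));
		}

	});
adq.performActions();

out.println(blogCounts);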
The great part about the ADQ approach is that it really splits my concerns. The setAddCriteriaMethod() is responsible for adding the criteria that select the entities I need to operate on. The completely separate step, setPerformActionMethod(), is responsible for doing something with each matched user. I don't have to worry about whether the system has 5 users with the @example.com domain or 5 million; this code will handle it correctly every time.

In your perform action method, you don't have to limit yourself to the current entity you're processing. Say you were trying to create a map of email addresses to blog counts, as in the sketch just above: querying for the users gets you most of the way there, because you already have each user's email address, and in the perform action method you can count the blogs they've authored via BlogsEntryLocalServiceUtil and put the result into a map keyed by email address.

Locating Services
Not every Liferay service has a static util class, and some services have multiple implementations that are differentiated by a property value, so you might want a particular one out of many possible options. There are two ways you can try: a Registry lookup and a ServiceTracker.

The Registry lookup is pretty easy to understand:

Registry registry = RegistryUtil.getRegistry();
GroupLocalService groupLocalService =
registry.getServices(GroupLocalService.class, null)[0];
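
When a service interface has several registered implementations, the second argument to getServices() takes an OSGi filter string. Here's a minimal sketch, assuming you want the ModelResourcePermission registered for blogs entries; model.class.name is the property those components are registered with, so substitute whatever interface and property apply in your case:

import com.liferay.portal.kernel.security.permission.resource.ModelResourcePermission;

// Reuses the registry variable from above; the filter narrows the match.
Object[] services = registry.getServices(
	ModelResourcePermission.class,
	"(model.class.name=com.liferay.blogs.model.BlogsEntry)");

// A filter can match nothing, so check before indexing into the array.
if ((services != null) && (services.length > 0)) {
	out.println("Found: " + services[0].getClass().getName());
}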
The array-indexing syntax here may look a little odd, but it gets the job done.

A ServiceTracker (or a ServiceTrackerList or ServiceTrackerMap, if you'd like) can also get the job done, but there is one particular thing you have to pay attention to that differs from regular Java usage:

Bundle bundle = FrameworkUtil.getBundle(
com.liferay.portal.scripting.groovy.internal.GroovyExecutor.class);
ServiceTracker groupServiceTracker =
new ServiceTracker(bundle.getBundleContext(), GroupLocalService.class, null);
groupServiceTracker.open();
GroupLocalService groupLocalService = groupServiceTracker.getService();
// use groupLocalService here...
groupServiceTracker.close();
See how we're using the GroovyExecutor class when getting the Bundle? Because of the way the Groovy script is processed, we need to use the class that runs the script, the GroovyExecutor, as the class we find the bundle for. Here I've used a regular ServiceTracker, but you can also leverage the ServiceTrackerList and ServiceTrackerMap (https://liferay.dev/en/b/service-trackers) when they better suit your needs.

ServiceContext, Current HTTP Request/Response, Etc.
Sometimes you just need one or more of these things, but how the heck can you get them? The ServiceContextThreadLocal is your friend in this case. It gives you a fairly complete ServiceContext instance and, once you have that, you pretty much have access to whatever you need. Want the current ThemeDisplay? No problem: getRequest().getAttribute(WebKeys.THEME_DISPLAY) on the ServiceContext's request hands it over. There are lots of other goodies in there that you can turn to in a pinch.

import com.liferay.portal.kernel.service.*;
ServiceContext ctx = ServiceContextThreadLocal.getServiceContext();
out.println(ctx.getRequest());
out.println(ctx.getRemoteHost());
out.println(ctx.getRequest().getRequestURI());
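
Since the ThemeDisplay comes up so often, here's a minimal follow-up sketch that builds on the ctx variable above (you'll also need imports for com.liferay.portal.kernel.theme.ThemeDisplay and com.liferay.portal.kernel.util.WebKeys); the null check is just defensive, since a script can run without a bound request:

// Pull the ThemeDisplay off the current request, if there is one.
if (ctx.getRequest() != null) {
	ThemeDisplay themeDisplay = (ThemeDisplay)ctx.getRequest().getAttribute(
		WebKeys.THEME_DISPLAY);

	out.println(themeDisplay.getScopeGroupId());
	out.println(themeDisplay.getUser().getFullName());
}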
I included in the ServiceContext script above something I've been leaving out of the other samples: the imports. Just like in Java, your Groovy scripts must import the classes necessary for the script to run. Without them, you'll get all kinds of fun "Unable to resolve class" Groovy errors to figure out. Fortunately, it tells you what you're missing, so it is pretty easy to get them squared away.

Logging
Groovy gives you System.out as the simple out global variable that you can use to write to the console, but you can use a logger too. I like using slf4j, so sometimes I'll include in my scripts:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
Logger logger = LoggerFactory.getLogger(
com.liferay.portal.scripting.groovy.internal.GroovyExecutor.class);
logger.info("Groovy script Hello World is running...");
Why would you want to use a logger this way? Maybe you want a record of the script's execution and results in the log so you can find them later, after you've moved away from the Scripting control panel. Maybe you want the output in the logs because other people are running scripts too and you're counting on the logging to keep track of them all. Maybe you don't like that the Execute button in the Scripting control panel is all the way at the bottom; if you have a lot of output text, you have to scroll forever to reach the Execute button to run the script again, and logging instead of using out for the messages means you won't need to scroll. I don't know; I'm sure you might find a reason and, if you do, you now know it's an option. Oh, and you don't have to use the GroovyExecutor class like I did here. You could use another class, or you could pass any old string to get a logger.

Conclusion
The Scripting control panel and Groovy scripts are power tools to keep handy in your toolbox. They offer the ability to execute dynamic code that fully leverages the Liferay API, so any verification and validation Liferay does as part of an API call will be in effect for your script.

Although Groovy has a lot of its own special syntax that you can use, my scripts often look like straight-up Java code. Most of the time I work in Java, so avoiding special Groovy syntax helps me avoid trying to use Groovy syntax in regular code where I can't use it at all. But please feel free to use it if it feels natural or makes for a better or more concise script. You do you!

I hope you can use some of these snippets I've shared. If you have a library of snippets that you use or other useful tidbits, please drop them in a comment below!
|
Posted
about 4 years
ago
by
Ashley Yuan
The new Liferay IntelliJ 1.9.3 plugin supports IntelliJ 2021.1. Head over to this page to download it.

Release Highlights
Improved validation for the package name in the new Spring MVC wizard
Bug fix for re-running a Docker server
Improvements to creating a Liferay Spring MVC portlet project
Support for creating a Liferay server outside of a Liferay workspace
Added validation to disable creating a new Docker server outside of a Liferay workspace
Screenshots
|
Posted
about 4 years
ago
by
Jamie Sammons
Announcing Liferay Commerce 4.0 GA2
We're excited to continue expanding and improving upon the Liferay Commerce product. The latest version of Commerce offers some minor SEO improvements and many bug fixes.

Download options
Liferay Portal and Liferay Commerce share the same bundle and Docker image. To get started using either Liferay Portal or Liferay Commerce, choose the download option best suited to your environment below.

Docker image
To use Liferay Portal 7.4 CE GA2:

docker run -it -p 8080:8080 liferay/portal:7.4.1-ga2

For more information on using the official Liferay Docker image, see the liferay/portal repo on Docker Hub.

Bundles and other download options
If you are used to binary releases, you can find the Liferay Portal 7.4 CE GA2 and Liferay Commerce 4.0 release on the download page. If you need additional files (for example, the source code or dependency libraries), visit the release page.

Dependency Management
If you are developing on top of the Liferay Platform and want to update your workspace to use the dependencies from the latest version, you only need a single dependency artifact. Just add the following to your build.gradle file:

dependencies {
compileOnly group: "com.liferay.portal", name: "release.portal.api"
}

All portal dependencies are now defined with a single declaration, and when using an IDE such as Eclipse or IntelliJ, all APIs are immediately available in autocomplete. By setting a product info key property, you can move all dependencies to a new version just by updating the liferay.workspace.product property in the Liferay workspace project's gradle.properties file:

liferay.workspace.product = portal-7.4-ga2

Liferay Commerce Features

SEO
Improved Liferay Sitemap Support
Liferay Commerce category pages and product detail pages are now automatically included in the Liferay sitemap.

Friendly URL Flexibility
The friendly URL for Commerce Display Pages can now be customized to improve search presentation. The /p/ in the default pattern /site-name/p/product-name can now be replaced with a string like /product/ to make the URL /site-name/product/product-name. Similarly, /g/ can be replaced with a string like /category/ for category display pages.

Documentation
All documentation for Liferay Portal and Liferay Commerce can now be found on our documentation site: learn.liferay.com. For more information on upgrading to Liferay Portal 7.4 CE GA2, refer to the Upgrade Overview.

Compatibility Matrix
Liferay's general policy is to test Liferay Portal and Liferay Commerce against newer major releases of operating systems, open source app servers, browsers, and open source databases (we regularly update the bundled upstream libraries to fix bugs or take advantage of new features in the open source we depend on). Liferay Portal and Liferay Commerce were tested extensively for use with the following application servers, databases, and JDKs:

Application Server
Tomcat 9.0
Wildfly 17.0
Database
HSQLDB 2 (only for demonstration, development, and testing)
MySQL 5.7, 8.0
MariaDB 10.2, 10.4
PostgreSQL 12.x, 13.x
JDK
IBM J9 JDK 8
Oracle JDK 8
Oracle JDK 11
All Java Technical Compatibility Kit (TCK) compliant builds of Java 11 and Java 8
Search
Elasticsearch 7.9.x-7.13.x

Source Code
Source is available as a zip archive on the release page, or at its home on GitHub. If you're interested in contributing, take a look at our contribution page.

Bug Reporting
If you believe you have encountered a bug in the new release, you can report your issue by following the bug reporting instructions.

Getting Support
Support is provided by our awesome community. Please visit the helping a developer page for more details on how you can receive support.

Fixes and Known Issues
Fixes
List of known issues
|