Posted almost 4 years ago by Jamie Sammons
Announcing Liferay Portal 7.4 CE GA2

Liferay Portal 7.4 CE GA2 marks the second major release in the 7.4 release cycle. Although released later than hoped, Liferay Portal 7.4 CE GA2 continues following the rolling release cycle announced last year for 7.3. The release contains many exciting new features across almost every aspect of the platform, with many of them based on feedback received from our community. Please see below for instructions on getting started, as well as a comprehensive list of all of the great new features found in the release.

Download options

Liferay Portal and Liferay Commerce share the same bundle and Docker image. To get started using either Liferay Portal or Liferay Commerce, choose the download option best suited to your environment below.

Docker image

To use Liferay Portal 7.4 CE GA2:

docker run -it -p 8080:8080 liferay/portal:7.4.1-ga2

For more information on using the official Liferay Docker image, see the liferay/portal repo on Docker Hub.

Bundles and other download options

If you are used to binary releases, you can find the Liferay Portal 7.4 CE GA2 and Liferay Commerce 4.0 release on the download page. If you need additional files (for example, the source code, or dependency libraries), visit the release page.

Dependency Management

If you are developing on top of the Liferay platform and want to update your workspace to use the dependencies from the latest version, you only need a single dependency artifact. Just add the following line to your build.gradle file:

dependencies {
    compileOnly group: "com.liferay.portal", name: "release.portal.api"
}

All portal dependencies are now defined with a single declaration. When using an IDE such as Eclipse or IntelliJ, all APIs are immediately available in autocomplete for immediate use. By setting the product info key property, you can update all dependencies to a new version by updating the liferay.workspace.product property in the Liferay Workspace project's gradle.properties file:

liferay.workspace.product = portal-7.4-ga2

Liferay Portal Features

Experience Management

Enable advanced file uploader options from item selector (LPS-130203)

We've extended the capabilities of the item selector to help users perform some advanced operations without having to move to the admin section of Documents and Media. It is now possible to create folders to organize your documents, or to use the file uploader to fill in all the metadata directly while creating content.

Allow using categories created in Asset Libraries from the connected sites (LPS-121460)

Content creators can now use vocabularies and categories defined in an Asset Library to categorize content created within a site, allowing the management of these vocabularies to be centralized and helping reuse the same categories in different sites.

Provide a moderation system for Questions

In addition to the regular workflows already in place, we are introducing a new dedicated moderation workflow for Questions. This workflow is based on contribution: moderators only review new questions, answers, or comments coming from newly created users, while messages from more experienced users are approved automatically. As the Questions app uses Message Boards as its backend, to enable this new workflow admins need to go to System/Site Settings -> Message Boards.
You can choose the number of contributions needed before messages are approved automatically.

Support display page templates for Categories (LPS-105049)

Non-technical users can currently add categories to the content they create. Commerce products can easily reach thousands of categories, and those categories are associated with various types of content: commerce products, blog entries, web content, etc. Creating a page and a navigation for each of those categories is a time-consuming process that can lead to errors. We've added the ability to automatically create display pages based on categories, reducing both the human error and the time needed to create all of these pages.

Display Twitch, Vimeo, YouTube and Facebook videos (LPS-128602)

Video marketing is a popular strategy for acquiring new customers, since users generally prefer videos over static imagery and text. Documents and Media anticipated that trend by managing external URLs for videos without having to host the video in Liferay, but these videos lacked adequate support for being displayed on a page. We now have a fragment, called "External Video", that displays videos from popular video platforms whose URLs are managed in Liferay.

Give flexibility to non-technical users to visually create and style navigation (LPS-113825)

Modern menus are increasingly customized, mixing text, images, and collections in a unique combination of items. Currently a navigation menu can be displayed using the menu display fragment or the menu navigation display. We have previously delivered fragment composition and drop zones, and we want to offer users the possibility to reuse these functions to create tailor-made mega-menus.

Preview a Display Page Template with concrete content before publishing it (LPS-103821)

When displaying content, many things can impact the final look: the content length, the image sizes, the related collections, etc. It is therefore necessary to visually control both the look and feel and the content present on a page. One example of this is authoring a web content display page with a collection of related items. Being able to author the Display Page Template while previewing it with concrete content provides a much better experience to DPT users and lowers the risk of creating display pages with the wrong content.

Digital Operations

Confirmation field for text and numeric fields (LPS-125860)

In several use cases there is critical information that needs to be confirmed by the user. For example, when creating an account, users often need to input their email and then confirm it, so there is less chance of mistakes. The confirmation field is a new option for text and numeric fields available in the form builder.

New extension point between form pages (LPS-129467)

A new extension point built to provide more flexibility in Forms customizations.

New extension point in Rule Builder conditions (LPS-129417)

A new extension point built to provide more flexibility in Forms customizations.

Platform Improvements

Manage permissions of entities from the APIs

The APIs for headless delivery now expose the permissions applied to both collections and individual elements. It is possible to retrieve the permissions for specific roles, as well as to modify the permissions applied to roles, directly from the APIs.

Substitute Xuggler with an alternative library

Due to the deprecation of Xuggler, we replaced it for building previews of audio and video files.
This capability is directly managed through ffmpeg from now on.

A new, type-safe global state API (LPS-122065)

There is a new API for sharing and synchronizing state across apps, globally, in Liferay DXP. The @liferay/frontend-js-state-web API provides a set of functions for reading and writing global state, as well as subscribing to updates, in a type-safe manner. Under the covers, you can think of it as a global key-value store. The API is not intended to be a store for all state in an application; the guideline to "keep component state as close as possible to where it's needed" is still valid.

Setting for exporting Asset Links or not (LPS-83011)

By default, Asset Links are exported even if no content is selected, which adds information to the LAR that may not be necessary. With this new development, Asset Links can be configured to control whether or not they are always included in the LAR. Setting this to false still exports Asset Links that have been changed.

Export/Import configuration entries have been reorganized and clarified (LPS-126769)

The Export/Import system/instance settings offer several configuration options, but not all of them influence each of the Export/Import, Staging publish, and Site Template propagation functions; some configurations are specific to one of these functions. A new Staging section has been created, and the configurations that affect only Staging have been moved there. Also, "Export/Import" was renamed to "Export/Import, Staging": configurations under this section affect all of the Export/Import, Staging, and Site Template propagation features, and that was added to the description.

Documentation

All documentation for Liferay Portal and Liferay Commerce can now be found on our documentation site: learn.liferay.com.
For more information on upgrading to Liferay Portal 7.4 CE GA2, refer to the Upgrade Overview.

Compatibility Matrix

Liferay's general policy is to test Liferay Portal and Liferay Commerce against newer major releases of operating systems, open source app servers, browsers, and open source databases (we regularly update the bundled upstream libraries to fix bugs or take advantage of new features in the open source we depend on). Liferay Portal and Liferay Commerce were tested extensively for use with the following application/database servers:

Application Server
Tomcat 9.0
Wildfly 17.0
Database
HSQLDB 2 (only for demonstration, development, and testing)
MySQL 5.7, 8.0
MariaDB 10.2, 10.4
PostgreSQL 12.x, 13.x
JDK
IBM J9 JDK 8
Oracle JDK 8
Oracle JDK 11
All Java Technical Compatibility Kit (TCK) compliant builds of Java 11 and Java 8
Search

Elasticsearch 7.9.x-7.13.x

Source Code

Source is available as a zip archive on the release page, or in its home on GitHub. If you're interested in contributing, take a look at our contribution page.

Bug Reporting

If you believe you have encountered a bug in the new release, you can report your issue by following the bug reporting instructions.

Getting Support

Support is provided by our awesome community. Please visit the helping a developer page for more details on how you can receive support.

Fixes and Known Issues
Fixes
List of known issues
Posted almost 4 years ago by David H Nebinger
So some time ago I wrote the Cluster Details marketplace plugin. Basically you drop this guy into all of your Liferay clusters and you get a nifty little control panel that tells you details about the cluster and all of the nodes which have joined it, and it identifies who the leader of the cluster is at the moment. Not super complicated, but it is a handy little tool for identifying details about your cluster.

Previously it was only available for Liferay 7.0, but I've just updated it for 7.0 through 7.4, CE and DXP.

Additionally, there's a new Gogo command introduced in this version. If you log into Gogo and invoke the command liferay:clusterFckd, it will dump the same info into your Gogo console.

Find version 1.1.0 in the Liferay Marketplace here: https://web.liferay.com/marketplace/-/mp/application/124517019

Enjoy!
Posted almost 4 years ago by Ashley Yuan
The new Liferay IntelliJ 1.9.2 plugin supports IntelliJ 2021.1. Head over to this page to download it.

Release Highlights
enable docker support
restructure new liferay module wizards
improve validation on project name in new liferay module wizard
display gogo shell port in liferay server configuration
update embedded blade cli to latest 4.0.9 SNAPSHOT
support deploy projects to a docker server
improve build service action on a service-builder project
add OSGi component code completion for single properties (community contribution)
Screenshots

Creating a Liferay module project

Users can create a new Liferay module project through File > New > Module… or by right-clicking on the project > New > Module… Users first need to enter a valid name and click the Next button to go to the next page, where they can choose the project template.

About Liferay Docker support

Right-click on the Liferay workspace project and choose Liferay > InitDockerBundle. A Liferay Docker server is added automatically once the Docker image has finished downloading and a new Docker container has been created. Click the Run/Debug button to start the Docker server. To deploy a project to your Docker server, right-click on the project and choose Liferay > DockerDeploy.

Other screenshots

Feedback

If you run into any issues or have any suggestions, please come find us on our community forums or report them on JIRA (INTELLIJ project); we are always around to try to help you out. Good luck!
Posted almost 4 years ago by Neil Griffin
The following versions of PortletMVC4Spring were released on June 2, 2021 AD:
5.3.0: For use with Spring Framework 5.3.x (tested with 5.3.7). Release Notes
5.2.2: For use with Spring Framework 5.2.x (tested with 5.2.15.RELEASE). Release Notes
5.1.3: For use with Spring Framework 5.1.x (tested with 5.1.20.RELEASE). Release Notes
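For reference, a Maven dependency on the 5.3.0 framework jar would look roughly like this (the coordinates below are assumed from the project's naming convention; the README.md mentioned below is the authoritative source):

```xml
<dependency>
    <groupId>com.liferay.portletmvc4spring</groupId>
    <artifactId>com.liferay.portletmvc4spring.framework</artifactId>
    <version>5.3.0</version>
</dependency>
```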
For download coordinates and archetype usage, see the project's README.md document. At the time of this writing and the release of the aforementioned versions, there are no significant differences between the source files of the master (5.3.x), 5.2.x, and 5.1.x branches. The main purpose of having three branches is the convenience of testing with three different versions of the Spring Framework.

Web Flow

Thanks to the heroic work of Liferay's Fabian Bouché, we have adopted the portlet-related source that was pruned from the Spring Web Flow project. The code is found in a separate com.liferay.portletmvc4spring.webflow.jar artifact, and there is a Web Flow version of the Applicant portlet demo as well. Thanks Fabian!

Download Statistics

As shown in the download stats graph above, the PortletMVC4Spring project is enjoying great adoption by developers! There has been an increasing trend over the past year, which is very encouraging! The breakdown for the framework and security jars reflects the same trend. With close to 2,000 downloads/month of the framework jar, we can be sure that there are A LOT of Spring MVC portlets out there.

Many thanks to all in our community for using PortletMVC4Spring. Enjoy!
Posted almost 4 years ago by Manushi Jani
You can configure the site page with the string shown above and fetch it using the code below:

ServiceContextThreadLocal.getServiceContext().getRequest().getParameter("srce")

or

ParamUtil.get(ServiceContextThreadLocal.getServiceContext().getRequest(), "srce", "")

You can also fetch it in a Velocity template:

#set ($serviceContext = $portal.getClass().forName("com.liferay.portal.service.ServiceContextThreadLocal").getServiceContext())
#set ($httpServletRequest = $serviceContext.getRequest())
#set ($seminarId = $paramUtil.getLong($httpServletRequest, 'srce'))
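Note the difference between the two approaches: getParameter() can return null, while Liferay's ParamUtil falls back to the supplied default when the parameter is missing or unparsable. A standalone sketch of that defaulting behavior (plain Java, no Liferay classes; names are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class ParamDefaults {

    // Mimics ParamUtil.getLong(request, name, default):
    // parse the first value if present, otherwise fall back to the default.
    static long getLong(Map<String, String[]> params, String name, long defaultValue) {
        String[] values = params.get(name);
        if (values == null || values.length == 0) {
            return defaultValue;
        }
        try {
            return Long.parseLong(values[0].trim());
        } catch (NumberFormatException nfe) {
            return defaultValue;
        }
    }

    public static void main(String[] args) {
        Map<String, String[]> params = new HashMap<>();
        params.put("srce", new String[] {"12345"});

        System.out.println(getLong(params, "srce", 0L));    // parameter present
        System.out.println(getLong(params, "missing", 0L)); // falls back to the default
    }
}
```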
Posted almost 4 years ago by Marcial Calvo Valenzuela
Check the English version here.

One of the most common needs we face in Liferay Portal/DXP installations is monitoring, along with an alerting system that warns us when something is not working as it should (or as expected). Whether it is a managed web service in the cloud, a virtual machine, or a Kubernetes cluster, we need to monitor the whole infrastructure in real time and even keep a history. Grafana was developed to address this need.

Grafana is free software under the Apache 2.0 license that allows metrics to be visualized and formatted. With it we can create dashboards and charts from multiple sources. One of those source types is Prometheus.

Prometheus is open-source software that collects metrics via HTTP pull. To extract these metrics, exporters are needed that pull them from the desired endpoint. For example, for Kubernetes cluster metrics we could use kube-static-metrics. In our case we want to monitor Liferay Portal/DXP, which in an on-premise installation we would do over the JMX protocol, using tools such as JConsole or VisualVM for live inspection, or an APM that extracts and persists this information in order to keep a history of the platform's behavior. Here we will use JMX Exporter to extract the mBeans from the Liferay Portal/DXP JVM so that Grafana can then read them, store them, and use them to build our dashboards and alerts. We will also use the Kubernetes container Advisor (cAdvisor) to obtain the metrics the cluster exposes about our containers.

Requirements
A Kubernetes cluster
Liferay Portal/DXP deployed on Kubernetes. If you do not have that, you can follow the blog post I wrote a few months ago on deploying Liferay Portal/DXP on Kubernetes.
Let's get started!

Configuring the JMX Exporter for Tomcat

The first thing we will do is extract the metrics over JMX from our Liferay Portal/DXP Tomcat. For this we will use the JMX exporter, which we will run as a javaagent inside the JVM opts of our Liferay Portal/DXP.
Using our Liferay Workspace, we add a jmx folder containing the javaagent .jar and the yaml file that configures the exporter. In this yaml file we include some patterns to extract the mBeans as metrics: counters to monitor Tomcat requests, sessions, servlets, the thread pool, the database pool (Hikari), and the Ehcache statistics. These patterns are fully adaptable and extensible to whatever needs arise. Next we will need to build our Docker image and push it to our registry in order to update our Liferay Portal/DXP deployment with the latest image, or alternatively configure the stock Liferay Portal/DXP image with a configMap and a volume that adds this configuration.
Through the LIFERAY_JVM_OPTS environment variable available in the Liferay Portal/DXP Docker image, we include the flag needed to run the javaagent. Following the JMX exporter documentation, we need to specify the port on which the metrics will be exposed and the configuration file with our mBean export patterns. In my case I added this environment variable using a configMap where I group this kind of environment variable, but it could be inserted directly into the Liferay Portal/DXP deployment. We open port 8081 on the container, naming it "monitoring", and add the annotation Prometheus needs to scrape metrics from pods running this kind of container, as well as the port it should scrape from, so that Prometheus attaches only to that port. With this configuration we can start our first Liferay Portal/DXP pod, and on port 8081 we will have a /metrics endpoint exposing the configured metrics. If we wanted to inspect it, we could open a nodePort to port 8081, but it is not necessary.
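The /metrics endpoint serves Prometheus' plain-text exposition format: one "name{labels} value" sample per line, plus # HELP and # TYPE comment lines. A minimal standalone sketch of picking samples out of such a scrape (the metric names below are hypothetical; the real ones depend on your exporter patterns):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MetricsLineParser {

    // Parses a single exposition-format sample line, e.g.
    //   tomcat_threadpool_busy{port="8080"} 12.0
    // into its metric name (with labels) and value.
    // Simplified: assumes label values contain no spaces.
    static Map.Entry<String, Double> parse(String line) {
        int space = line.lastIndexOf(' ');
        String name = line.substring(0, space);
        double value = Double.parseDouble(line.substring(space + 1));
        return Map.entry(name, value);
    }

    public static void main(String[] args) {
        // A tiny, hypothetical scrape; comment lines are skipped.
        String[] scrape = {
            "# TYPE jvm_memory_bytes_used gauge",
            "jvm_memory_bytes_used{area=\"heap\"} 1.073741824E9",
            "tomcat_threadpool_busy{port=\"8080\"} 12.0"
        };

        Map<String, Double> samples = new LinkedHashMap<>();
        for (String line : scrape) {
            if (!line.startsWith("#")) {
                Map.Entry<String, Double> e = parse(line);
                samples.put(e.getKey(), e.getValue());
            }
        }
        System.out.println(samples);
    }
}
```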
Installing Prometheus

Now we will install Prometheus in our cluster:

First of all, we need to create the namespace where we will deploy Prometheus. We will call it "monitoring". Next, we add the cluster resources to be monitored. We need to apply the following yaml manifest, which creates the "prometheus" role and binds it to the "monitoring" namespace. This grants the "get", "list" and "watch" permissions on the cluster resources listed under "resources". In our case we add the pods and the k8s nodes: the pods so we can reach the /metrics endpoint where our JMX exporter exposes metrics, and the nodes so we can read the CPU and memory requested by our Liferay Portal/DXP containers:

We need to create the following configMap to configure our Prometheus. In it we define the job that scrapes the metrics the JMX Exporter generates in each Liferay Portal/DXP pod, plus the job that extracts the metrics from the Kubernetes container Advisor in order to monitor container CPU and memory. We also set the interval for reading these metrics to 5s:

Now we deploy Prometheus in our monitoring namespace. First we create the PVC where Prometheus will store its data, and then we apply the yaml manifest with the Prometheus deployment. It includes the configuration created above and opens port 9090 on the container.

Next we create the Service that exposes our Prometheus instance so we can access it. In this case we use a nodePort to reach the target port:

And once deployed… if we browse to our cluster's IP at nodePort 30000, we will see Prometheus:

Under targets (http://<cluster-ip>:30000/targets) we can see the state of the pods configured to export metrics. In our case, a single pod with Liferay Portal/DXP:

From the Prometheus home page we can enter queries to extract the metrics we want. These queries must be written in PromQL. To learn a bit more about this query language, you can read the basics on the official Prometheus site and pick up some examples. If we look at a query's output we can see, in addition to the resulting value, where the metric came from: the pod name, namespace, IP, release name (if HELM was used for the deployment), etc. We can also view the metric's output as a graph:
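As a starting point, queries like the following read the Liferay JVM's heap usage and garbage-collection time (the jvm_* metric names come from the JMX exporter's built-in JVM collector; any Tomcat or Ehcache metric names will depend on the patterns in your exporter configuration):

```promql
# Heap bytes in use by the Liferay JVM:
jvm_memory_bytes_used{area="heap"}

# Per-second rate of time spent in GC over the last 5 minutes:
rate(jvm_gc_collection_seconds_sum[5m])
```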
Installing and configuring Grafana

Grafana is the metrics analytics platform that lets us query, and create alerts on, the data obtained from a metrics source such as our Prometheus. Grafana's strong point is the creation of very powerful, fully customized dashboards that persist over time and can also be shared. Alerting through many channels, such as email, Slack, SMS, etc., is another very strong point. Just as we did with Prometheus, we need to install Grafana in our k8s cluster. We will deploy Grafana in the same monitoring namespace.

In the "monitoring" namespace, create the following configMap, which applies the configuration Grafana needs to connect to Prometheus. Essentially it creates a datasource whose endpoint is the Prometheus service created earlier: prometheus-service.monitoring.svc:8080

Create the PVC where Grafana will store its data with the following yaml manifest.

Create the Grafana deployment in "monitoring" by applying the following manifest. It opens port 3000 on the container.

Create the service that exposes Grafana, mapping port 3000 through a nodePort to 32000 on our cluster:

Now we can access our Grafana instance:

Logging in with the default administrator user (admin, password admin), we can start creating our dashboards:

Grafana is very powerful and very adaptable, so the charts you want to create will take some work to get them exactly the way whoever will use them wants. For a Liferay Portal/DXP installation on k8s, I am monitoring container memory, container CPU usage, JVM memory usage and garbage collector time, the database pools and the HTTP connection pool, plus a dashboard that monitors the cache-miss percentage of the Liferay Portal/DXP caches:

It is possible to configure alerts for when a dashboard reaches a desired threshold. For example, for when the cache-miss percentage stays above 90% for 5 minutes, we would configure the following alert: We can then review the alert history for the dashboard and get precise information about the moment the threshold was crossed, in order to analyze the problem.
Conclusion

Monitoring our Liferay Portal/DXP installation on Kubernetes, in a fully customized way, is possible thanks to Prometheus and Grafana. Moreover, we get not only live monitoring of the platform, but also a history that helps us analyze its behavior in problematic situations.
Posted almost 4 years ago by David H Nebinger
Introduction

Historically, developers have always wanted to be able to control site creation. It's not that Liferay admins are untrusted or anything like that; it is really more the case that setting up a new site can get complicated and the steps to create the site can be quite long. A site may have initial documents, web contents (with structures and templates), pages to create and populate with theme settings, layouts (for widget pages), portlet or fragment placements, segment definitions, permissions, custom fields, configuration, ... Leaving all of that to a manual process, no matter how well documented it may be, well, that's just asking for trouble.

One of the first ways that we (developers) tried to solve this problem was through the use of LAR files. We get the site just the way we want it in a lower environment, then we export the LAR file and load it into a new site in a new environment. It sounds good, but in practice this is often challenging when LAR file loading fails, and more importantly when the LAR really needs changes at load time due to site configuration and whatnot. The LAR process is just not flexible enough to handle all of the cases that we might need to consider when loading.

For a long time Liferay provided (and actually still does provide) a framework called the Resource Importer. The RI can load XML resources and non-XML assets from a known filesystem layout, and the loading itself can apply interpolation, so the resources themselves are more like templates than concrete instances. The RI has a lot of functionality and capability built into it, but besides not being well documented, it is also a little dated. The RI cannot handle the new content pages or fragments, for example, and it is missing other things that are necessary to instantiate a site for later versions of Liferay 7.x.

Liferay recognized these and other shortcomings with the RI, so they set out to design a new way to load sites, and the Site Initializer was born.

What is the Site Initializer

Site Initializers are the cool way to create new sites.
Note that I didn't say "cool new way to create new sites", because the Site Initializer has actually been around since Liferay 7.1.

So what is a Site Initializer anyway? Well, first it is an interface, the com.liferay.site.initializer.SiteInitializer interface, so it is intended to be an OSGi @Component that you'll define for your own SI implementation. The SiteInitializer implementation is a programmatic way to populate a site, so it is a tool for developers.

But most of all, a SiteInitializer is a blank canvas. It is really a tool to pre-create everything you need in a new site: every page, every layout, every portlet, widget or fragment on the page, roles, users, permissions, folders and files, forms, etc.

That's great, you might say, but how do you get started? First of all, it is backed by an interface, so we're going to use a module to contain the SiteInitializer implementation. It is standard practice to use a separate module for your SI implementation because typically you'll have a number of resource files and dependencies that your SI will be loading, so separating it from your other modules will help prevent module pollution.

So if we wanted to create an Example Site Initializer, we might create the module in our Liferay Workspace in the modules directory using the command:

blade create -t api -p com.example.theme.example.site.initializer.internal example-site-initializer

We'll have some cleanup to do, but at least we have a module ready to start our additions. The first thing I do is update my bnd.bnd file for my new initializer:

Bundle-Name: Example Site Initializer
Bundle-SymbolicName: com.example.site.example.site.initializer
Bundle-Version: 1.0.0
Web-ContextPath: /site-example-site-initializer

Then I create my concrete implementation class like so:

@Component(
    immediate = true,
    property = "site.initializer.key=" + ExampleSiteInitializer.KEY,
    service = SiteInitializer.class
)
public class ExampleSiteInitializer implements SiteInitializer {

    public static final String KEY = "example-initializer";

    @Override
    public String getDescription(Locale locale) {
        ResourceBundle resourceBundle = ResourceBundleUtil.getBundle(
            "content.Language", locale, getClass());

        return LanguageUtil.get(resourceBundle, "example-description");
    }

    @Override
    public String getKey() {
        return KEY;
    }

    @Override
    public String getName(Locale locale) {
        ResourceBundle resourceBundle = ResourceBundleUtil.getBundle(
            "content.Language", locale, getClass());

        return LanguageUtil.get(resourceBundle, "example");
    }

    @Override
    public String getThumbnailSrc() {
        return _servletContext.getContextPath() + "/images/thumbnail.png";
    }

    @Override
    public void initialize(long groupId) throws InitializationException {
        // this is where the magic starts...
    }

    @Override
    public boolean isActive(long companyId) {
        return true;
    }

    @Reference(
        target = "(osgi.web.symbolicname=com.example.site.example.site.initializer)"
    )
    private ServletContext _servletContext;

}

The remaining step is to provide the missing parts, namely the resources...

SI Module Parts

An SI module is going to contain a couple of basic parts:
A concrete class that implements the SiteInitializer interface and is registered as an OSGi @Component.
In the src/main/resources for our project, all of the additional files we need to use to populate the new site with.
Reads like a short list, but once you start building one of these you'll find that it is going to take some time and energy to get right.

The SiteInitializer interface is pretty short; here's the one that I grabbed from 7.4 CE:

public interface SiteInitializer {
    public String getDescription(Locale locale);

    public String getKey();

    public String getName(Locale locale);

    public String getThumbnailSrc();

    public void initialize(long groupId) throws InitializationException;

    public boolean isActive(long companyId);
}

The getKey() method returns the key for the site initializer, and all keys should be unique. Make sure when you're selecting your key that there will be no conflict with other possible SI implementations. A good way to do this is to use the name of your initializer with an "-initializer" suffix; for example, the Commerce Minium site initializer uses a key of "minium-initializer" and the Speedwell SI uses a key of "speedwell-initializer".

getName() and getDescription() return the name and the description for the SI using the provided Locale, and normally we just route this through the resource bundle for the module to return the correct values.

getThumbnailSrc() provides the path to the thumbnail source image for the SI.

The isActive() method is used to test whether the SI should be active or not. Typically we use this method to check whether a specific theme the SI is using is deployed, and only return true if the theme is available in the portal.

The magic happens in the initialize(long groupId) method. We're given the id of the new site group to populate, and the rest is up to us as developers to build out.

SI Resources and Initialization

So we know the magic happens in the initialize() method, but what does that magic consist of? In the initial version implemented for 7.1, there really wasn't much additional magic to point at. In fact, you can look at the SI implementation classes for 7.1's Fjord theme, the Porygon theme, and the Westeros Bank theme, and you'll see that there is a lot of code but not a lot of supporting code in the implementations. The one thing they did standardize on was using JSON resource files to hold the details that drive the site population.

Since then, well, honestly, not much magic has been added.
When you check the latest 7.4, for example, we can look at the Welcome site initializer (the default welcome page that you get in new Liferay instances), and there's really a lot of low-level code in there.

And really, that's kind of the point. The SI framework handles declaring your SI implementation and providing a name and description so an admin can decide whether they want to use your SI or not; when they do, control gets passed to your initialize() method so you can take care of all of the details your SI needs to handle.

While the Welcome site initializer from 7.4 is pretty simple, you also get the Minium Site Initializer and the Speedwell Site Initializer for the Commerce themes. These two are very good examples of the kinds of control you can apply when creating a SI.

Well, I really need to correct myself here... There actually is some available magic in 7.3 and 7.4, and it comes from the commerce-initializer-util module. If you check out the AssetTagsImporter, the BlogsImporter, the JournalArticleImporter, etc., you'll find a number of implementations which use JSON resource files to load the related resources into Liferay.

Any SI that I build leverages these commerce-initializer-util classes when I can, and if I find one that is missing, I'll typically create it using these classes as templates for a clean way to implement it.

Conclusion

So this was your grand tour of Liferay's Site Initializers.

As you build and deploy your initializers, you can go create a new site in the control panel, pick your site initializer, and see your new site with all of your resources loaded in, ready to go.

This is an important thing about the Site Initializer - it really is only used when the site is first being created. After your site is created, you can't reapply a different SI implementation or apply changes from your updated SI to an existing site - it is only used during site creation.
If you need something that can load resources into an existing site, you'll have to dig into the Resources Importer (fortunately there's already a blog for that: https://liferay.dev/en/b/thinking-outside-of-the-box-resources-importer) or some other technique to apply site changes.

But for creating a new site using a known set of resources, it is really hard to beat the Site Initializers...

Let me know below in the comments if you've created a Site Initializer, along with some of the tricks you used in your implementation. And if you have ideas to make SIs a little easier to implement, share them too; I'd love to hear them.
|
Posted
almost 4 years
ago
by
David H Nebinger
Introduction

Liferay has, for a long time, supported role-based access control (RBAC). It is, of course, backed by the database, so (inheritance aside for the moment) a user is assigned a list of roles that is mostly fixed or unchanging. To change these role assignments, typically an admin logs into Liferay and, using the control panel, adds or removes roles as necessary.

This works well in a land where everything is neat and tidy and stable and not varied by external context. But here in the real world, we know that things are not neat, tidy, stable or free from external contextual influence.

The Use Cases

Sounds a little out there, right? Maybe some examples will help...

Sue is our normal Liferay administrator Monday through Friday, but on weekends Tom is the administrator. While we could just give them both the Administrator role and forget about it, from a security perspective this would not be as controlled as only giving them the Administrator role on the days when they actually are the administrator.

Or consider shift work, where Bill approves workflows during the day, but Donna takes over at night.

Yesterday Phil informed us he's going on paternity leave for 6 weeks starting on the 1st of the month. During that time, Venkat will be taking over Phil's role as content publisher. I suppose we could make a note to have the Liferay admin make the role changes on the 1st of the month and again six weeks later.

Mikayla is one of our content editors, but for security purposes the content editors should only be allowed to edit content when they are either on the internal network or coming through the VPN; if they're logging in from home, they should not be able to edit content. As a security feature, this ensures that if someone gets hold of Mikayla's credentials for her Liferay account, they won't be able to just log in and start editing content.

Some of these use cases we can solve by just giving out the roles to the users and trusting that they will use their access correctly. Some we can solve by having the Administrator log in at the appropriate time and make the changes.
Some, like Mikayla's case, can't be solved by a fixed list of roles at all.

All of these cases could be solved, however, if we programmatically changed role assignments. Through code we could tell which day of the week it is and give Admin to Sue or Tom as appropriate. Through code we could handle the time-of-day aspect of shift work to give or take roles. Through code we could grant the content publisher role to Venkat starting on the 1st of the month and ending 6 weeks later. And through code we can solve the problem of taking away roles when the user is not connecting internally or via the VPN, something we can't do otherwise.

To handle these kinds of situations, the only avenue I used to be able to find was to code up a solution that adds or removes roles dynamically via a post-login hook: basically, changing the list of assigned roles when the user logged in, knowing that list would hold for that login session, and that the next time they logged in it could be changed to a different set. The idea was that for a specific login session I knew what roles they would have assigned, but it was still primarily a fixed list of roles that didn't change during the session.

This wasn't a perfect solution, though, because the user may be logged in before the change is supposed to happen (i.e. Venkat logs in on the last day of the month) and stays logged in through the time the triggering event makes the change; if they don't log out, they won't see the change.

So clearly something better was needed, and fortunately it was delivered to us starting in Liferay 7.2...

Role Contributors

That old way of handling role changes got tossed out the door as of Liferay 7.2, and it has been replaced with a new service interface, com.liferay.portal.kernel.security.permission.contributor.RoleContributor. This interface is pretty short, so I'll just throw it in here:

/**
 * Invoked during permission checking, this interface dynamically alters roles
 * calculated from persisted assignment and inheritance. Implementations must
 * maximize efficiency to avoid potentially dramatic performance degradation.
 *
 * @author Raymond Augé
 */
public interface RoleContributor {

	public void contribute(RoleCollection roleCollection);

}

Like I said, pretty short, right? The RoleCollection object is a bit longer, but I'm going to include it here anyway:

/**
 * Represents a managed collection of role IDs, starting with the
 * initial set calculated from persisted role assignment and role
 * inheritance. The roles can be contributed via {@link
 * RoleContributor#contribute(RoleCollection)}.
 *
 * @author Carlos Sierra Andrés
 * @author Raymond Augé
 */
@ProviderType
public interface RoleCollection {

	/**
	 * Adds the role ID to the collection.
	 *
	 * @param roleId the ID of the role
	 * @return true if the role ID was added to the collection
	 */
	public boolean addRoleId(long roleId);

	/**
	 * Returns the primary key of the company whose permissions are being
	 * checked.
	 *
	 * @return the primary key of the company whose permissions are being
	 *         checked
	 */
	public long getCompanyId();

	/**
	 * Returns the primary key of the group whose permissions are being checked.
	 *
	 * @return the groupId of the Group currently being permission checked
	 */
	public long getGroupId();

	/**
	 * Returns the IDs of the initial set of roles calculated from persisted
	 * assignment and inheritance.
	 *
	 * @return the IDs of the initial set of roles calculated from persisted
	 *         assignment and inheritance
	 */
	public long[] getInitialRoleIds();

	public User getUser();

	public UserBag getUserBag();

	/**
	 * Returns true if the collection has the role ID.
	 *
	 * @param roleId the ID of the role
	 * @return true if the collection has the role ID; false otherwise
	 */
	public boolean hasRoleId(long roleId);

	/**
	 * Returns true if the user is signed in.
	 *
	 * @return true if the user is signed in; false otherwise
	 */
	public boolean isSignedIn();

	public boolean removeRoleId(long roleId);
}

So the RoleContributor is the service interface: you create a class that implements the RoleContributor interface, register it as an @Component, and Liferay will call it as part of the normal permission checking activities.

That is an extreme over-simplification of what is actually going on behind the scenes, but there are some important things to keep in mind...

The RoleContributors will only be called once per thread (request), regardless of how many actual permission checks are performed to service the request. Liferay will invoke all of the RoleContributors in each thread, so avoid doing crazy stuff lest you severely and negatively affect response time.

The only context details available are the same details normally available to the PermissionCheckers: the user, group and company, plus the list of roles that come from the persisted or inherited sources (minus any adjustments made by other RoleContributor instances) and the current list of role ids (which includes persisted and inherited roles as well as changes made by other RoleContributors). Other context details you might need may have to come from an external source such as a ThreadLocal or some other static provider.

The operations available to you on the RoleCollection allow you to add a role or remove a role, plus you can test whether the user has a role (including the persisted roles, the inherited roles, plus any additions made by other RoleContributors). The operations seem simple, but they can have a significant impact if you're not careful. Imagine a RoleContributor that does nothing but invoke removeRoleId() for the Administrator role indiscriminately; no one would effectively have the Administrator role on the platform.
Even worse, imagine a RoleContributor that adds the Administrator role to everybody!

A single RoleContributor implementation does not need to do everything; Liferay will invoke all registered implementations, and each can individually add/remove role ids as necessary (you can control ordering by using the service ranking property). So lean towards making separate, simple RoleContributor instances rather than One RoleContributor to Rule Them All...

How to Implement a RoleContributor

So to implement a RoleContributor, we just need to implement the interface and register it as an @Component OSGi service.

Let's look at the solution we might implement for Mikayla's case, since that's the one that couldn't be solved any other way. To solve her case, we need to know the client IP address for the current request, and as covered earlier, we don't have this context available within the RoleContributor itself. There is, however, the AuditRequestThreadLocal that is used as part of the Audit framework (it does exist in CE; CE is just missing some of the plumbing that is part of the full Auditing framework provided in DXP), and it has the client IP address for the request. So as long as the AuditFilter is enabled, the AuditRequestThreadLocal will contain the contextual detail we need.

Our implementation would look like:

/**
 * class StripContentEditorRoleWhenOffsiteRoleContributor: Strips
 * the "Content Editor" role from the user when they are offsite.
 *
 * @author dnebinger
 */
@Component(
	immediate = true,
	service = RoleContributor.class
)
public class StripContentEditorRoleWhenOffsiteRoleContributor
	implements RoleContributor {

	private static final String CONTENT_EDITOR_ROLE_NAME = "Content Editor";

	private final Map<Long, Long> _contentEditorRoleIdMap = new HashMap<>();

	/**
	 * contribute: This is the main entry point for the
	 * RoleContributor interface.
	 * @param roleCollection The current role collection.
	 */
	@Override
	public void contribute(RoleCollection roleCollection) {

		// we need to first determine if the current user is offsite or not

		String clientIPAddress = AuditRequestThreadLocal
			.getAuditThreadLocal().getClientIP();

		// using this we will determine if the user is offsite

		boolean offsite = _isIPAddressOffsite(clientIPAddress);

		if (!offsite) {

			// we don't have to change anything

			return;
		}

		// the current user is offsite, so we should remove any
		// "Content Editor" role the user might have.
		// We first need to get the role id

		long contentEditorRoleId = _getContentEditorRoleIdForCompany(
			roleCollection.getCompanyId());

		if (contentEditorRoleId == 0) {

			// no role found for this company, nothing to do

			return;
		}

		// remove the role from the role collection instance

		roleCollection.removeRoleId(contentEditorRoleId);

		// that's it, we're done!
	}

	/**
	 * _getContentEditorRoleIdForCompany: Roles can be duplicated across
	 * different instances (companies) in the portal, so we'll look up
	 * the role id for the current company.
	 * @param companyId Company id to get the role for.
	 * @return long The role id.
	 */
	private long _getContentEditorRoleIdForCompany(final long companyId) {

		// did we cache the role id?

		Long roleId = _contentEditorRoleIdMap.get(companyId);

		if (roleId != null) {

			// we have already looked up the role id for this company

			return roleId;
		}

		// we have not yet looked it up, do so now

		Role role = _roleLocalService.fetchRole(
			companyId, CONTENT_EDITOR_ROLE_NAME);

		if (role == null) {

			// there is no content editor role for this company

			return 0;
		}

		roleId = role.getRoleId();

		// put it in the cache so we have it in the future

		_contentEditorRoleIdMap.put(companyId, roleId);

		return roleId;
	}

	/**
	 * _isIPAddressOffsite: A utility method to evaluate an IP address
	 * and determine if it originates from offsite or not.
	 * @param ipAddress The IP address to check.
	 * @return boolean true if the address is offsite.
	 */
	private boolean _isIPAddressOffsite(final String ipAddress) {

		// do magical work to see if this is an internal IP address or not.
		// we'll fake out here and just return true, but a real implementation
		// would want to actually evaluate this

		return true;
	}

	@Reference
	private RoleLocalService _roleLocalService;

}

So a couple of things to point out... I am using the AuditRequestThreadLocal to get the current client IP address for the request.

Additionally, I'm using a local cache via the HashMap to track the role IDs for each company, so even if the role lookup takes some time to run, I only do it once; after that, I just use the cached value. Another option would have been to use an @Activate method to find all of the existing "Content Editor" roles and pre-populate the map, which would save the runtime lookup, but you get the general idea.

Conclusion

And that's basically it. For the other examples we started with...

For Sue/Tom, we could have a RoleContributor that first checks if the user is Sue or Tom (the user is available from the RoleCollection interface), then checks the day of the week; if it is M-F we add the Administrator role to Sue, and for Sat/Sun we add the Administrator role to Tom.

For the shift work, it's a similar deal. When it is a particular user or users, we check the current time of day and, if it falls within their shift, we add the role.

For Phil and Venkat, I likely wouldn't use this technique. I know, a curve ball, right? Well, here I wanted to demonstrate that this is not the solution to every problem. Because Phil is going to be out for a solid 6 weeks and Venkat will need the role for that entire time, if I were going to solve this programmatically I'd use two scheduled jobs to handle the role change on the 1st of the month and the change back 6 weeks later, but even that would be overkill. This case is best handled by just having the Admin change the roles manually.

If however you wanted to stick with the RoleContributor, it's going to follow the model shared earlier...
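An aside before wrapping up: the _isIPAddressOffsite() method above is faked out. A minimal, standalone sketch of what a real check might look like, assuming "onsite" simply means an RFC 1918 private address or loopback (your network's rules will almost certainly differ):

```java
import java.net.InetAddress;

public class OffsiteCheck {

	// Sketch only: treats RFC 1918 private addresses (10/8, 172.16/12,
	// 192.168/16) and loopback as "onsite"; everything else is offsite.
	public static boolean isOffsite(String ipAddress) {
		try {
			InetAddress address = InetAddress.getByName(ipAddress);

			// isSiteLocalAddress() covers the RFC 1918 ranges
			return !(address.isSiteLocalAddress() ||
				address.isLoopbackAddress());
		}
		catch (Exception exception) {

			// unparseable address: fail closed, treat as offsite

			return true;
		}
	}

	public static void main(String[] args) {
		// prints "false" (RFC 1918 private address)
		System.out.println(isOffsite("10.1.2.3"));

		// prints "true" (public address)
		System.out.println(isOffsite("8.8.8.8"));
	}
}
```

Handling the "coming through the VPN" half of Mikayla's case would depend on what address ranges your VPN concentrator hands out, so that part stays site-specific.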
Following that model: if the user is Phil or Venkat, we verify that the current date is within the 6-week timeframe and, if it is, give the role to Venkat and take it away from Phil; otherwise do nothing.

So some final thoughts...

Always, always, always consider performance as your number 1 concern in these implementations. Liferay will absolutely call this code for each and every request, and if you have slow code here, you will notice the performance hit and the capacity impact on your portal.

Use the most liberal caching methods you can to reduce ongoing overhead. In my example I do a role lookup only once per company, but no more. The rest of the code just executes plain Java within the class itself, so the performance impact in general will be negligible.

All RoleContributors will be invoked by Liferay (order can be imposed by using the service ranking component property), and it is considered better to have multiple simple RoleContributor implementations instead of one complex RoleContributor. But if using a single contributor offers better performance due to shared cached data, that is a valid reason not to separate it into smaller implementations.

Care should be taken when implementing one of these. If you remove a role from someone who should have it, or give a role to someone who shouldn't, you'll have a hard time diagnosing the problem after the fact, since the changes made by these implementations are not persisted, they're not audited, and they basically leave no trace.

That's all for this blog. If you do happen to build one of these, do me a favor and share your use case in the comments below. I'm interested to hear how this can be used to solve scenarios that maybe I haven't thought of yet.
|
Posted
almost 4 years
ago
by
Yuxing Wu
Downloads:
Liferay Portal 7.2 GA2: Patch | Readme
All vulnerabilities fixed in these patches have already been fixed in Liferay Portal 7.3 GA7. Please refer to the readme file for a list of issues addressed in each patch. For more information on working with patches, please see Patching Liferay Portal.

Thanks to Arun Das and Dominik Marks, binary builds of the patches are available:
From Arun, Liferay Portal 7.2: Link 1 | Link 2
From Dominik, Liferay Portal 7.2: Link 1
Disclaimer: Binary patches have not been tested by Liferay.
|
Posted
almost 4 years
ago
by
Ashley Yuan
The new installers and IDE 3.9.3 GA4 have been made available.

Community Download
https://liferay.dev/project/-/asset_publisher/TyF2HQPLV1b5/content/ide-installation-instructions

Customers Download
https://customer.liferay.com/downloads/-/download/liferay-workspace-with-developer-studio-3-9-3
https://customer.liferay.com/downloads/-/download/liferay-workspace-installer-2021-04-30
Release Highlights

Installer improvements:
- Add Liferay Portal 7.4 GA1 support
- Set default Liferay 7.3 version to 7.3.6 GA7

Development improvements:
- Support creating Liferay 7.4 projects
- Improved deployment support for Liferay 7.x
- Integration of Blade CLI 4.0.8
- Support for Liferay Workspace Gradle 3.4.8
- Improvements to the Liferay upgrade plan
- Add breaking changes for Liferay 7.4
- Add preview for the remove legacy files step
- Bug fixes
Feedback

If you run into any issues or have any suggestions, please come find us on our community forums or report them on JIRA (IDE project); we are always around to try to help you out. Good luck!
|