Posted over 6 years ago by David H Nebinger
I was recently fishing for topics to build potential sessions for Devcon 2019 (you're going, right? ;-) ) and one community member suggested the topic of Service Trackers.
While I'm not sure I could fill a whole session on service trackers, I think I can certainly fill a blog entry about them...
What Are Service Trackers?
The best way I can describe service trackers: they are a programmatic way to deal with dependency resolution, a programmatic alternative to the @Reference annotation.
Service trackers are used to track the addition, modification or removal of DS components, and the programmatic aspects give you a lot of control over that tracking.
Simple Service Tracker Example
The simplest examples of a service tracker can be found in the XxxLocalServiceUtil classes that Service Builder 7.x generates for you:
public static GuestbookEntryLocalService getService() {
    return _serviceTracker.getService();
}

private static ServiceTracker<GuestbookEntryLocalService, GuestbookEntryLocalService> _serviceTracker;

static {
    Bundle bundle = FrameworkUtil.getBundle(GuestbookEntryLocalService.class);

    ServiceTracker<GuestbookEntryLocalService, GuestbookEntryLocalService> serviceTracker =
        new ServiceTracker<>(bundle.getBundleContext(), GuestbookEntryLocalService.class, null);

    serviceTracker.open();

    _serviceTracker = serviceTracker;
}
In this simple example, the bundle (always the bundle the ServiceTracker is being created in, not the bundle with the interface the ServiceTracker is tracking) is used to get the BundleContext and create an org.osgi.util.tracker.ServiceTracker instance.
The code declares it is building a ServiceTracker over a specific component service type, in this case the GuestbookEntryLocalService interface. The constructor shown takes a BundleContext (for OSGi access), the service class, and null in place of a ServiceTrackerCustomizer instance (to filter available services).
When the service tracker is created, it needs to be open()ed; afterwards, the _serviceTracker.getService() method can be used to return the current instance the tracker has available.
If there are no service instances available, getService() will return null.
If there are multiple service instances available, getService() will return only one of those, but there is no guarantee which instance will be returned.
Service Tracker Customizers
A ServiceTrackerCustomizer is used to provide touchpoints for event handling for services (adds, modifies and removes) and can also filter service instances.
Say you have a Shape interface, and you're interested in getting new shapes, but you only want to track shapes that have an even number of sides (squares, hexagons, octagons, etc.). Normally a ServiceTracker is notified of all shape instances that are added and removed, but through your ServiceTrackerCustomizer you can filter the services that are tracked.
Here's our ServiceTrackerCustomizer for the even-number-sided shape services:
private static class ShapeServiceTrackerCustomizer implements ServiceTrackerCustomizer<Shape, Shape> {

    public ShapeServiceTrackerCustomizer(BundleContext bundleContext) {
        _bundleContext = bundleContext;
    }

    @Override
    public Shape addingService(ServiceReference<Shape> serviceReference) {
        // check the component properties on the service reference
        if ((GetterUtil.getInteger(serviceReference.getProperty("sides")) % 2) == 1) {
            // odd number of sides, don't track this service
            return null;
        }

        Shape service = _bundleContext.getService(serviceReference);

        // or check the shape instance itself for its sides
        if ((service.getSides() % 2) == 1) {
            // odd number of sides, destroy the instance before returning
            service.destroy();

            return null;
        }

        return service;
    }

    @Override
    public void modifiedService(ServiceReference<Shape> serviceReference, Shape service) {
    }

    @Override
    public void removedService(ServiceReference<Shape> serviceReference, Shape service) {
        service.destroy();
    }

    private final BundleContext _bundleContext;
}
Those properties you assign inside of the @Component annotation? Those are the properties we're getting from the service reference. We can then get the number of sides defined in the properties and decide if we want to track the service or not.
Or we can query the shape directly for its sides, as long as the API supports it.
Either way, we keep the shapes with an even number of sides and discard the odd-sided ones.
Service Tracker Lists
Tracking single services is not all that interesting. I mean, we could do a lot of what we've seen so far with an @Reference along with a target filter.
Imagine though that we are building a paint portlet and we use the shape implementations to draw on the canvas. We don't know in advance what shapes we might get, but as new shapes are added we want to be able to include and use them.
The Service Tracker itself can give us support to access a list of Shapes, but Liferay also provides the ServiceTrackerList utility class.
The ServiceTrackerListFactory can create ServiceTrackerList instances for tracking a list of services.
We can create a ServiceTrackerList for our Shapes using the following example:
private ServiceTrackerList<Shape, Shape> shapesTrackerList =
    ServiceTrackerListFactory.open(bundle.getBundleContext(), Shape.class,
        new ShapeServiceTrackerCustomizer(bundle.getBundleContext()));
This convenience method creates the service tracker and also opens it.
The ServiceTrackerList interface provides methods to get the current size (number of available services) and an iterator over services.
Now our paint portlet can use the ServiceTrackerList and it will automatically be notified of new shapes being added and removed from the system. The paint portlet doesn't need to do anything special, it can just iterate over the list of shapes and make them available to the user.
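For example, the portlet's drawing code might just loop over whatever shapes are currently tracked. A minimal sketch (the Canvas type and the draw() method on Shape are hypothetical):
protected void drawShapes(Canvas canvas) {
    // ServiceTrackerList is Iterable, so this sees whatever is deployed right now
    for (Shape shape : shapesTrackerList) {
        shape.draw(canvas);
    }
}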
Service Tracker Maps
Lists are interesting, but another useful collection is the ServiceTrackerMap.
A service tracker map tracks services in a map, where the key typically comes from one of the service component's properties.
Liferay's DDM facilities offer a practical example. The DDM types implement an interface, but the implementation has a name such as "Text", "Radio", etc. Using a ServiceTrackerMap, you can still have all of the different service implementations, but in a map form you can access a specific instance from the name property.
If our shapes have a name property (so we can track Circles, Triangles, Squares, etc.) and we want to be able to find a Shape implementation by its name, we can construct our ServiceTrackerMap as follows:
private ServiceTrackerMap<String, Shape> shapesTrackerMap =
    ServiceTrackerMapFactory.openSingleValueMap(bundle.getBundleContext(), Shape.class, "name");

Shape square = shapesTrackerMap.getService("Square");
Now not only is our map completely dynamic as bundles are deployed and undeployed, but we are populating a map from the component properties and we can use our key value to locate a specific service instance.
The ServiceTrackerMapFactory also has openMultiValueMap() methods that map each key to a List of services, so you can have multiple services for a given key.
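A minimal sketch of the multi-value variant, assuming our shape components also declare a "sides" property:
private ServiceTrackerMap<String, List<Shape>> shapesBySides =
    ServiceTrackerMapFactory.openMultiValueMap(bundle.getBundleContext(), Shape.class, "sides");

// both Square and Rectangle components would be tracked under the "4" key
List<Shape> fourSidedShapes = shapesBySides.getService("4");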
Conclusion
This has been a whirlwind introduction to Service Trackers, including the ServiceTrackerCustomizer. We also saw service tracker lists and maps. Interestingly, both lists and maps support ServiceTrackerCustomizers as well, so you have the same filtering control over those collections.
If you're wondering about practical uses for ServiceTrackers, ServiceTrackerCustomizers, ServiceTrackerLists and/or ServiceTrackerMaps, I'd point you to the Liferay source code. Liferay relies heavily on all of these classes and you can find some great examples of each.
Posted over 6 years ago by David H Nebinger
Liferay 7.2 has an updated Service Builder that introduces some changes that we should all be aware of.
From a Dependency Injection perspective, not much is really going to change for us when it comes to consuming the services. Just as we did for 7.0 and 7.1, we will still have a member variable in our @Component for a service instance and then @Reference inject it in. As long as the service is available, we're off to the races...
From a service development perspective, we have some new options and constraints.
The Dependency Injector Choice
The first change is a new attribute on the <service-builder> tag in service.xml, "dependency-injector". This new attribute has two possible values:
"spring" - This is the classic dependency injection implementation we all know and love. Spring is used to instantiate your service layer and the beans will be exposed into OSGi for easy @Reference injection.
"ds" - This is a new OSGi-ified rewrite for the service layer which discards Spring and uses regular OSGi Declarative Services to expose the service implementations and handling @Reference injection.
For new projects built using the latest Service Builder, the default value for the attribute is going to be "ds". The ramifications of this choice will be discussed below.
If you wish to stick with the classic implementation, it is easy to do. Just set the "dependency-injector" attribute to "spring" and it will stay with Spring.
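A minimal sketch of what that looks like in service.xml (the package-path value here is hypothetical):
<service-builder dependency-injector="spring" package-path="com.example.guestbook">
    ...
</service-builder>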
The DS Injector
The DS injector is the OSGi Declarative Services injector. Using DS, your service components are registered as components by decorating them with the @Component annotation. It also means that you can add dependencies to your components using the @Reference annotation; you don't need to figure out how to use the @ServiceReference or @BeanReference annotations for the Spring-based dependency injection, or use a ServiceTracker to manually find an OSGi component.
Sounds great, right?
Well, there are some changes that come into play, things you have to watch out for when going down this path...
bnd.bnd Directive
First of all, you have to ensure that the following directive is in your bnd.bnd file:
-dsannotations-options: inherit
For the DS injector, superclasses of your XxxxLocalServiceImpl class will have some @Reference injections. This directive will ensure that OSGi processes all @Reference injections not just in your own XxxxLocalServiceImpl class, but will also process injections on all of the superclasses.
Without this bnd directive, the @Reference injections will not occur on the superclasses, so you may encounter NullPointerExceptions when trying to use those superclass member fields.
Local Service References
In the Spring-based dependency injector, the superclass will have member fields for the local and remote services and persistence services for all of the entities defined in the service.xml. This makes it really convenient to build, for example, a delete of a Guestbook entity that also deletes entries in the guestbook:
public Guestbook deleteGuestbook(long guestbookId) {
    guestbookEntryLocalService.deleteEntriesByGuestbookId(guestbookId);

    return super.deleteGuestbook(guestbookId);
}
When using the DS injector, these references will not be in the superclass anymore.
If you need the GuestbookEntryLocalService, you have to add a member field to your GuestbookLocalServiceImpl class and @Reference it in.
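A minimal sketch of that field, reusing the name the Spring superclass used to provide so code like the delete example above still compiles:
@Reference(unbind = "-")
protected GuestbookEntryLocalService guestbookEntryLocalService;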
Circular References
Now you may be thinking, "Well I'll just copy my code that declares the local services and @Reference them to every one of my XxxLocalServiceImpl classes, that way I'll have all of the services the same way I did before."
There's a problem with that thinking, it's the darn circular reference issue. You can have a circular reference if component A @References in component B, and component B @References in component A. Since A depends on B but B isn't available, A cannot start. Since there's no A, B's dependency is also not satisfied and it cannot start.
So basically if GuestbookLocalServiceImpl has an @Reference for a GuestbookEntryLocalService, your GuestbookEntryLocalServiceImpl class cannot @Reference the GuestbookLocalService or you'll end up with a circular reference and neither instance will start.
The tough part with this issue is that it can be "fun" trying to track down. Your guestbook-service module will deploy and start just fine, but anything depending on either of these component services will not. So you'll trace back from the thing that is supposed to work but isn't, find that it doesn't have the service component, and then work down the dependency chain trying to figure out why things aren't available.
Like I said, fun times.
Resolving Circular References
There are ways to get around the circular dependency issue.
The first is to use ReferenceCardinality.OPTIONAL with your @Reference annotation. With the optional cardinality, OSGi will not require the reference to be satisfied for the component to start. So GuestbookLocalServiceImpl could use the following member field declaration:
@Reference(
    cardinality = ReferenceCardinality.OPTIONAL,
    policyOption = ReferencePolicyOption.GREEDY, unbind = "-"
)
protected GuestbookEntryLocalService guestbookEntryLocalService;
With this change to the @Reference annotation, the GuestbookLocalService component will be able to start whether or not a GuestbookEntryLocalService is available. So the circle is broken and, once a GuestbookEntryLocalService instance is available, OSGi will inject it (the greedy policy option is what allows the late-arriving service to be bound).
There is a catch here: it is possible that an instance of GuestbookEntryLocalService is never available, leaving this field null and possibly causing a NullPointerException when you try to use it. If GuestbookEntryLocalServiceImpl had an @Reference dependency on a service that wasn't deployed, for example, it couldn't start and therefore the GuestbookLocalServiceImpl's dependency wouldn't be resolved.
Another option is not to rely on @Reference at all, instead using a ServiceTracker to track and use available services. You could add code like the following to GuestbookLocalServiceImpl:
public static GuestbookEntryLocalService getService() {
    return _serviceTracker.getService();
}

private static ServiceTracker<GuestbookEntryLocalService, GuestbookEntryLocalService> _serviceTracker;

static {
    Bundle bundle = FrameworkUtil.getBundle(GuestbookEntryLocalService.class);

    ServiceTracker<GuestbookEntryLocalService, GuestbookEntryLocalService> serviceTracker =
        new ServiceTracker<>(bundle.getBundleContext(), GuestbookEntryLocalService.class, null);

    serviceTracker.open();

    _serviceTracker = serviceTracker;
}
You just invoke the getService() method to get a service reference.
Again, however, this too will return null if there is no GuestbookEntryLocalService instance available.
Choosing The Right Dependency Injector
So you might be asking: which dependency injector is best for you?
If you use Spring, you get all of the services for entities in the service.xml file injected and available to use. Even though your services are exposed as OSGi components, you cannot use the @Reference annotation to inject any OSGi component services into your classes. To use OSGi components, you'd need to use a service tracker to get access to them.
If you use DS, you can use @Reference injection in your components, but you lose the automatic injection of services for entities in the service.xml file. You have to avoid the common traps like a missing bnd declaration and circular references.
Like I said at the beginning, consuming the services in your portlet or other code is easy, you just @Reference inject them in, so this aspect shouldn't play a role in the decision.
Me, personally? I kind of prefer the Spring injection. I don't want to have to manage the wiring of the services for entities in the same service.xml, plus I don't mind using service trackers when necessary to access OSGi services.
But I don't think you can go wrong using DS either, as long as you avoid the common traps.
Posted over 6 years ago by Dominik Marks
A customer wanted to integrate the German Servicekonto with Liferay. The Servicekonto is an authentication service used by (or at least planned to be used by) several German federal states. It offers an authentication mechanism for citizens and organizations. Every citizen can create an account using the new German identity card. As the Servicekonto offers authentication via OpenID Connect, it should have been easy to integrate with Liferay.
So the customer configured OpenID Connect in Liferay and tried to log in with a Servicekonto account. Everything was properly configured, but the login failed with a strange error message:
2019-06-28 09:32:11.331 ERROR [https-jsse-nio-8443-exec-1][OpenIdConnectFilter:132] Unable to process the OpenID login
java.lang.NullPointerException at com.liferay.portal.security.sso.openid.connect.internal.OpenIdConnectUserInfoProcessorImpl.processUserInfo(OpenIdConnectUserInfoProcessorImpl.java:49)
We were not sure at first why it failed. An OpenID Connect configuration for e.g. Google worked perfectly. After some research we found that it had something to do with the UserInfo Endpoint. That endpoint is used to fetch additional information about the user after a successful login. Liferay, for example, uses the e-mail address to match the user against an existing Liferay user and to create a new user if one does not exist yet. As section 5.3.2 of the OpenID Connect specification says, the UserInfo Endpoint may return either plain JSON or a JWT (signed and/or encrypted). The Servicekonto seems to return a JWT. The Liferay implementation does not handle this case: the call to userInfoSuccessResponse.getUserInfo() returns null, and the implementation never checks userInfoSuccessResponse.getUserInfoJWT().
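A minimal sketch of the missing fallback, using the Nimbus SDK types the Liferay module already works with (this assumes a signed, not encrypted, JWT and omits signature validation):
UserInfo userInfo = userInfoSuccessResponse.getUserInfo();

if (userInfo == null) {
    // the endpoint answered with a JWT instead of plain JSON
    JWT userInfoJWT = userInfoSuccessResponse.getUserInfoJWT();

    // getJWTClaimsSet() throws ParseException; an encrypted JWT would
    // have to be decrypted before the claims are readable
    userInfo = new UserInfo(userInfoJWT.getJWTClaimsSet());
}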
Creating a test environment
To test whether the above situation caused the problem, we created a test environment. As there is no "local" installation of the Servicekonto, we decided to set up a local Keycloak instance. Fortunately Keycloak offers a prepared Docker image, so installation was rather easy. After creating a client in Keycloak we configured it to return a JWT for the UserInfo Endpoint. This can be done at Configure > Clients > Settings > Fine Grain OpenID Connect Configuration > User Info Signed Response Algorithm.
With this configuration Liferay threw the same exception as above.
To extend or to overwrite?
OpenID Connect is handled in Liferay in the OpenIdConnectServiceHandlerImpl class. So the first thought was to replace the whole module with our own implementation. However, we did not want to lose updatability, so we decided to just create a new component for the OpenIdConnectServiceHandler with a higher service ranking, so that Liferay would load our implementation instead of the default one.
Some problems arose from this: the default implementation uses Nimbus to handle the OpenID Connect plumbing. If our own module delivered its own jars for Nimbus, the classes would be incompatible and would throw a ClassCastException (same classes, but loaded by different classloaders). So we wrote a fragment for the default implementation and just added the required packages as Export-Package. This way we were able to re-use the existing libraries from the default implementation (David Nebinger has an interesting blog post on how to do that).
After fixing that, we discovered that even though our implementation had a higher service ranking than the default implementation, our component was not used. The reason is that the references where the components are injected are reluctant: if a reference is already injected, it will not be replaced by a higher-ranked component automatically. However, it will be replaced if the service is reconfigured so that the previously injected component no longer matches the criteria. This can be done by creating new configuration files for the relevant components. The Liferay documentation shows how to do that.
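A minimal sketch of such a configuration file, using OSGi's standard reference targeting (both the file name and the reference name here are hypothetical; they have to match the consuming component's PID and its @Reference name):
# osgi/configs/com.liferay.portal.security.sso.openid.connect.internal.OpenIdConnectFilter.config
openIdConnectServiceHandler.target="(component.name=com.example.JWTOpenIdConnectServiceHandler)"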
Our own implementation now handles JWT encrypted responses from the UserInfo Endpoint and falls back to the default implementation if the endpoint answers with a plain JSON response.
The Result
Logins using OpenID Connect with a JWT answer from the UserInfo Endpoint now work properly. With this, the customer can successfully log in using the Servicekonto (and other systems that answer with a JWT). [Screenshots in the original post show a working login using our test environment with Keycloak.]
Final Notes
In a strict sense, the "missing" functionality should be considered a bug in Liferay. The above was done with Liferay 7.1.3 GA4. As far as I can see, the default implementation has not changed in 7.2 or in the master branch.
Liferay creates a user when someone logs in via OpenID Connect for the first time. To make this work, you have to enable the "Allow strangers to create accounts" option.
Posted over 6 years ago by Minhchau Dang
Over the last few months, we at Liferay Support have encountered multiple customers who were having trouble with MEMORY_CLUSTERED scheduled jobs disappearing in their clustered environments.
In trying to figure out why their clustered jobs were disappearing, we learned that the problem lay in a misunderstanding of the (undocumented) API that the customer was calling during their component deactivation. Along the way, I realized that while I had an intuition about how scheduling was supposed to work in a cluster, I'd never done a deep dive into the scheduler engine code to say for certain that any of my intuitions were correct, something that should be pretty important when working with an undocumented API.
As a disclaimer, this post is the result of an investigation to resolve that knowledge gap: a description of how I believe things currently work and how that way of working came to be, based on reading through commit histories and JIRA tickets.
Understanding SchedulerEngine
What is SchedulerEngineProxyBean and why is its implementation so weird?
If you look at Liferay code, you’ll see a few classes with names ending in ProxyBean. When you open them up, you’ll see that they implement some interface, but the implementation of every method of that interface does nothing except throw an UnsupportedOperationException. Looking at those, you might end up being really confused, and find them weird, but there’s a reason they’re written that way.
However, before that, a little bit of backstory.
Before OSGi, Liferay introduced a Util class for every interface in order to allow instances of classes not managed by Spring (portlets, for example) or classes that lived outside of the Liferay Spring context (third-party web applications, for example) to call instances of Liferay’s Spring-managed classes.
Before Liferay addressed the inherent performance problem of long advice chains (see Shuyang’s blog from 2011 for additional details), Util classes also functioned to instantiate advice-like wrappers around existing implementations. For example, scheduler was initially implemented so that every call to the API was converted into a message bus message, with the real scheduler engine implementation happening in a message bus worker thread.
This design of putting wrapper-like logic into a Util class resulted in a particular side-effect. Since all of the message bus routing logic lived in the Util class, this meant that if you had a reference to the Spring-managed instance of SchedulerEngine, it would invoke the method directly, since your method calls would only be routed to the message bus if you used Util instead. In LEP-7304, the message bus routing was converted into a wrapper class, but the fundamental difference between a Util class and a direct reference was maintained.
Eventually, Liferay did solve that problem of long advice chains, and LPS-14031 rewrote the API layer of scheduler so that message bus routing was implemented using an advice. This is when SchedulerEngineProxyBean was introduced.
You can think of this implementation pattern as the following:
Start with a dummy implementation (SchedulerEngineProxyBean), usually with a name that ends with ProxyBean
Use a factory so that Spring knows to instantiate a proxy that wraps the dummy implementation
Use the invocation handler provided for the proxy (MessagingProxyInvocationHandler) to convert the method call into a message bus message
With a listener that listens to messages on the message bus destination, deserialize the message bus message and invoke the method on the real implementation (see the sketch after this list)
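Here's a rough, generic sketch of those four steps (heavily simplified; the real moving parts are SchedulerEngineProxyBean, MessagingProxyInvocationHandler, and the listener on the liferay/scheduler_engine destination):
// steps 1-3: SchedulerEngineProxyBean is the dummy target (every method just
// throws UnsupportedOperationException); the proxy's invocation handler never
// calls it, converting each method call into a message bus message instead
SchedulerEngine schedulerEngine = (SchedulerEngine)Proxy.newProxyInstance(
    SchedulerEngine.class.getClassLoader(), new Class<?>[] {SchedulerEngine.class},
    (proxy, method, args) -> {
        Message message = new Message();

        // serialize the call so a worker thread can replay it
        message.setPayload(new MethodHandler(new MethodKey(method), args));

        // step 4 happens in a listener on this destination, which
        // deserializes the payload and invokes the real implementation
        return MessageBusUtil.sendSynchronousMessage(
            "liferay/scheduler_engine", message);
    });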
Through this proxy bean pattern, Liferay provides the API in core, and then it provides the implementation in a separate Marketplace plugin. It also provides consistency between calling the Util and calling the Spring-managed instance. You can see similar approaches in things like workflow.
With the move to OSGi, Util classes still exist, but now they exist for classes that are not managed by OSGi to call instances of OSGi-managed classes.
What kind of API did Liferay provide with SchedulerEngine?
When scheduler was first introduced in LEP-6187, it had a very simple API, which basically served as a wrapper around specific Quartz Scheduler methods. At the time, they were really just the same methods that you would call in the Quartz Quick Start Guide.
start = Scheduler.start
shutdown = Scheduler.shutdown
schedule = Scheduler.scheduleJob
unschedule = Scheduler.unscheduleJob
Later, with LPS-7391 Liferay thought it would be interesting to create a portlet that would allow you to manage scheduled jobs. So with the subtask described in LPS-7395, Liferay added an API to allow you to modify existing jobs.
update
delete
Liferay also assumed that people wouldn’t want to just modify scheduled jobs, but maybe they’d want to write scheduled jobs on the fly. Therefore, Liferay also added an API method to allow you to add a Groovy script that would run at a scheduled time.
addScriptingJob
In order to make informed decisions about why a scheduled job needed to be updated, with LPS-7397, Liferay also provided an API to retrieve metadata about existing scheduled jobs.
getScheduledJob
getScheduledJobs
Out of all that, Liferay thought that you might also want to know about timings of a scheduled job, so as part of LPS-7397, Liferay also provided those helper methods as well:
getStartTime
getEndTime
getNextFireTime
getFinalFireTime
Another natural question to ask is also about past timings of a scheduled job.
getPreviousFireTime
getJobExceptions
In addition, Liferay also provided a way to interact with that metadata, allowing you to pause and resume scheduled jobs, check whether a job has been paused, and query its current state:
getJobState
pause
resume
Unfortunately, as Liferay prepared for its 6.0 release, Liferay’s UI/UX team got pulled into designing newer features that would be added to that release, and the user interface for managing an older feature like scheduler never got designed. Ultimately, the whole idea of a built-in portlet to manage scheduled jobs ended up in the product backlog, never to be heard from again.
Why did Liferay introduce a SchedulerEngineHelper?
Later, with LPS-23998, Liferay also needed a way to derive a cron expression, based on the same concepts as Liferay’s Calendar portlet, in order to improve the way we were implementing scheduled staging. The logic was also added to SchedulerEngine, because it looked like a common concern for anything wanting to create scheduled tasks.
getCronText
With LPS-25385, Liferay decided to broadcast an audit event any time that a scheduled job was fired, and so we added a method to the SchedulerEngine interface that would be called whenever a scheduled job fired.
auditSchedulerJobs
After reflecting on this a little more, we realized that from the standpoint of separation of concerns, saying that every implementation of a SchedulerEngine needed to also provide getCronText to derive a cron expression, as well as auditSchedulerJobs to broadcast an audit event, didn't make any sense. Additionally, addScriptingJob wasn't really a special function of a scheduler, either.
Therefore, with LPS-29425, we introduced a new layer, SchedulerEngineHelper, that would provide the implementation of broadcasting an audit event, while also serving as an adapter that could simplify how developers interacted with the SchedulerEngine, without requiring the SchedulerEngine to provide any implementation relating to those interactions, allowing SchedulerEngine itself to be relatively stable. We also moved over the methods that retrieve specific metadata about a scheduled job; since getScheduledJob contains all of the metadata, the individual accessors could simply delegate to it and pull out the individual fields.
Understanding SchedulerEngineHelper
What changed with SchedulerEngineProxyBean in 7.x?
Let’s keep in mind what we got from SchedulerEngineProxyBean in earlier releases. Through this proxy bean pattern, Liferay provides the API in core, and then it provides the implementation in a separate Marketplace plugin.
When you think about it like that, this is the same benefit you get with OSGi. Therefore, with Liferay 7.x, Liferay decided to reimplement scheduler as OSGi-managed components, rather than Spring-managed components injected into OSGi.
As a result, we effectively lost the benefit of Spring advices. However, rather than reconsider our decision, and rather than move all the logic back to the Util again, Liferay opted to divide all the concerns related to scheduler across a long implementation chain:
QuartzSchedulerEngine is provided as an OSGi component, disabled by default
ModulePortalProfile uses the concept of portal profile gatekeepers (an undocumented Liferay API that makes it easier to conditionally enable components that are disabled by default) to enable QuartzSchedulerEngine (mentioned above) and SchedulerEngineHelperImpl (mentioned later)
QuartzSchedulerProxyMessageListener waits for a reference to QuartzSchedulerEngine and registers itself to the liferay/scheduler_engine message bus destination
SchedulerEngineProxyBeanConfigurator instantiates a SchedulerEngineProxyBean, registers it as something that routes messages to the liferay/scheduler_engine message bus destination (similar to the old Util class in earlier Liferay releases), and registers it as an OSGi service with scheduler.engine.proxy.bean=true
A SchedulerEngineConfigurator (which will be enabled using a portal profile gatekeeper) waits for a reference to scheduler.engine.proxy.bean=true, and on activation registers a scheduler with scheduler.engine.proxy=true
SingleSchedulerEngineConfigurator (only enabled for 7.0.x CE) registers an unclustered scheduler
ClusterSchedulerEngineConfigurator (only enabled for 7.0.x DXP, and all releases after that when clustering was re-added to CE) registers the original unclustered scheduler if cluster.link.enabled=false, or a clustered wrapper around the proxy bean if cluster.link.enabled=true
SchedulerEngineHelperImpl waits for a reference to scheduler.engine.proxy=true
Developers who want to work with the scheduler obtain a reference to the SchedulerEngineHelper service (rather than to the SchedulerEngine service), and make calls against the provided API. Of course, that provided API isn’t well-documented right now, which causes a lot of confusion when using scheduler, so we’ll move onto documenting that next.
What new methods were added to SchedulerEngineHelper with 7.x?
When SchedulerEngineHelperImpl is activated, it will create a ServiceTracker that uses an internal class (SchedulerEventMessageListenerServiceTrackerCustomizer) to track when new instances of SchedulerEventMessageListener are registered and when existing instances are unregistered.
In other words, with 7.x, scheduled jobs are managed by creating components that provide the SchedulerEventMessageListener service.
When a new component providing the SchedulerEventMessageListener service is registered to OSGi, addingService will deactivate a thread local (used by SchedulerClusterInvokeAcceptor), populate a map that will effectively ask all callee nodes to ignore the call, broadcast the method call to the cluster (where the callee nodes will ignore it), and then attempt to add a trigger for the job on the local node.
When an existing component providing the SchedulerEventMessageListener service is unregistered from OSGi, removedService will deactivate a thread local (used by SchedulerClusterInvokeAcceptor), populate a map that will effectively ask all callee nodes to ignore the call, broadcast the method call to the cluster (where the callee nodes will ignore it), and then attempt to remove a trigger for the job from the local node.
With LPS-59681, Liferay added API methods to SchedulerEngineHelper to help register your scheduled job, which is probably a MessageListener if you’re coming from earlier releases:
register: Adapts a regular MessageListener as a SchedulerEventMessageListener, registers the adapted wrapper as an OSGi component, and remembers it so that it can be manually unregistered by calling unregister
unregister: Unregisters the SchedulerEventMessageListener component corresponding to the MessageListener, assuming you created one by calling register
You can create regular MessageListener classes and register them from OSGi, as is done in an existing blade sample (BladeSchedulerEntryMessageListener). Alternately, you can also create a component that provides the SchedulerEventMessageListener directly, but that comes with some caveats which will be described later in the code samples.
Understanding ClusterSchedulerEngine
Now that we have a basic understanding of SchedulerEngine, the next thing we want to know is how scheduler works in a clustered environment.
As noted earlier, ClusterSchedulerEngineConfigurator instantiates a ClusterSchedulerEngine, providing it with an existing SchedulerEngine that will perform all the actual work, as well as any other OSGi-managed classes (because the lifecycle of ClusterSchedulerEngine is managed by Liferay, not OSGi).
The real magic of ClusterSchedulerEngine doesn’t exist in the class itself, but in ClusterableProxyFactory. We essentially instantiate a dynamic proxy around ClusterSchedulerEngine, and our invocation handler checks for a Clusterable annotation on every method declared by its target object. The elements set against that annotation will then be used to determine how that method call should work in a clustered environment.
If you’ve never seen an annotation before, you may want to read up on Annotation Basics from Oracle’s Java tutorials for additional background.
Clusterable
All methods on Clusterable currently have Javadocs, but it’s worth mentioning the methods again here.
onMaster: Whether to invoke this method on the master node rather than the local node (if the local node is not a master node). If set to true, Liferay will attempt to route the method call to the master node. If the current node is not the master node, parameters will be serialized on the invoking node and sent to the master node, and the return value will be serialized on the master node and sent back to the invoking node.
acceptor: If onMaster is not set (or is explicitly set to false), Liferay will load a class implementing ClusterInvokeAcceptor. Once you specify this value, Liferay will attempt to call an internal _invoke method within ClusterableInvokerUtil on every node. Within this invocation, each node will call the accept method of the ClusterInvokeAcceptor, and if it returns true, the node will proceed to invoke the original method that we had proxied.
The acceptor element can be confusing the first time you encounter it without reading the implementation for ClusterableInvokerUtil.invokeOnCluster. To summarize, within invokeOnCluster, Liferay calls ClusterableInvokerUtil.createMethodHandler to create a serializable method with additional environment information (which, if you’re familiar with functional programming, you can think of as equivalent to creating a closure) that will be invoked on all nodes on a cluster.
With that, we can understand that once the Clusterable annotation is set and the onMaster element is set to false, the caller node will unconditionally attempt to invoke the method on all nodes in the cluster. The ClusterInvokeAcceptor will be called on each callee node to determine whether the method call proceeds on that node.
In other words, the ClusterInvokeAcceptor is used to determine whether a callee node is ready to invoke the method. It does not prevent the caller node from broadcasting the invocation to the cluster.
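For illustration, the annotated methods on ClusterSchedulerEngine look roughly like this (signatures abbreviated; see the actual class for the real thing):
@Clusterable(onMaster = true)
public SchedulerResponse getScheduledJob(String jobName, String groupName, StorageType storageType) {
    ...
}

@Clusterable(acceptor = SchedulerClusterInvokeAcceptor.class)
public void pause(String jobName, String groupName, StorageType storageType) {
    ...
}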
ClusterSchedulerEngine
So now that we have a background on Clusterable, we can use that background to understand what ClusterSchedulerEngine does in a cluster.
First, let’s look at the methods where the annotation has the element onMaster set to true. It turns out that the only methods that are annotated in this way are the methods that retrieve metadata about scheduled jobs that we’d talked about when we went over LPS-7397.
getScheduledJob
getScheduledJobs
Therefore, we can think of this as saying that whenever you use the API to retrieve metadata about scheduled jobs, all nodes will ask the master node for that information. This means that no matter what node you are on (whether you’re doing it from a Groovy script or you’re writing your own scheduled job management logic), these API calls will provide what the master node believes is the state of those scheduled jobs.
Next, let’s look at the methods where the annotation has the element acceptor set. It turns out that these are all the methods that modify the state of a job, which were introduced in LEP-6187 (the initial creation of scheduler), LPS-7395 (the API we’d added for a scheduled job management portlet that never came to be), and LPS-7397 (more of the API we’d added for that scheduled job management portlet that never came to be).
All of them have the same value set (SchedulerClusterInvokeAcceptor.class), which is to say that every one of these methods uses the same rules to decide whether to execute the method. This means that if you attempt to modify the state of a scheduled job on any node, all of the other nodes will be informed of the method call.
An important thing to note is that each node decides whether to carry out the invocation via SchedulerClusterInvokeAcceptor. In other words, the acceptor does not decide whether the call is relayed to the rest of the cluster. All method invocations that do not have the onMaster element set to true will always be broadcast to the cluster:
schedule
unschedule
update
delete
pause
resume
suppressError
What’s interesting is that when you look at the actual ClusterSchedulerEngine implementation in each of these methods, they have special logic to check whether they are the master node. In other words, for scheduler in particular, having an acceptor set is the same as onMaster; the only difference is whether all of the other nodes in the cluster should be notified.
This leaves the following methods that have no annotation, which include the initialization and the destruction of the scheduler engine implementation.
start
shutdown
To summarize everything so far, we know the following. There are exactly two methods in ClusterSchedulerEngine (start and shutdown) that are not broadcast to a cluster. With the built-in Liferay implementation based on Quartz, Quartz itself has some cluster-management capabilities built into it. As a result, while it's possible that non-default implementations of a scheduler might want clustering logic, Liferay itself won't provide this logic out of the box because its default implementation does not need it.
Importantly, for all other methods that are part of the SchedulerEngine interface, calling the API is equivalent to asking the master node to execute the method, while all other nodes update metadata assuming that the master node has completed that execution, unless that node is not ready to recognize that invocation call, as determined by SchedulerClusterInvokeAcceptor.
Disappearing Scheduled Jobs
So with all of this knowledge, we can now revisit the disappearing MEMORY_CLUSTERED scheduled jobs problem mentioned earlier.
To understand what happened, first we need some background information on MEMORY_CLUSTERED, and how it relates to everything we’ve learned about ClusterSchedulerEngine, which provides the functionality.
Conceptually, a MEMORY_CLUSTERED job was added in LPS-15343, and it equates to a job that is not persisted in any way (in the default Quartz implementation, it uses a RAMJobStore), but it retains the desirable property of a persisted job where only one node executes the scheduled job. This is achieved through ClusterSchedulerEngine, which adds a mechanism for limiting scheduled jobs to one node, by only notifying the scheduler engine of one node about the job.
In the initial implementation, the first node to successfully acquire an entry in the Lock_ table in Liferay would run MEMORY_CLUSTERED scheduled jobs. With LPS-51058, this was converted into running jobs on the coordinator node elected by JGroups, which effectively resulted in MEMORY_CLUSTERED jobs always running only on the node designated as the JGroups coordinator node. Then, in order to resolve LPS-66858, Liferay added a solution where each node in a cluster maintained metadata on the timing of scheduled jobs, so that it would remember the proper start times for MEMORY_CLUSTERED jobs if it is chosen as the new JGroups coordinator.
The root cause of the problem lay in assumptions about what would happen with the following block of code:
@Deactivate
protected void deactivate() {
    try {
        _schedulerEngineHelper.unschedule(
            _schedulerEntryImpl, StorageType.MEMORY_CLUSTERED);
    }
    catch (SchedulerException se) {
    }

    _schedulerEngineHelper.unregister(this);
}

@Reference(target = ModuleServiceLifecycle.PORTAL_INITIALIZED, unbind = "-")
private volatile ModuleServiceLifecycle _moduleServiceLifecycle;
It’s easy to understand why the code was written in this way. Intuitively, the idea is that if you manually deactivate your component (usually by stopping a bundle, for example), you’ll want to make sure that you clean up any references to it as well.
However, in practice, unschedule isn’t a clean-up method. It specifically asks the scheduler engine implementation to stop running the scheduled job. In order to force a MEMORY_CLUSTERED job to stop running a scheduled job, the master node must stop running the job, and all nodes need to make sure that they also remember that the scheduled job shouldn’t be running any more.
While this works fine in a single-node environment, or with a manual deactivation, the trouble occurs when something a developer doesn't anticipate deactivates the component; in this case, that something was a server shutdown in a clustered environment.
To understand this more clearly, imagine a situation where you've shut down a node. As part of the shutdown process, Liferay unregisters the ModuleServiceLifecycle service (once the server shutdown process starts, the portal is no longer initialized), and this component loses its reference to ModuleServiceLifecycle. Because our scheduled job component has flagged the ModuleServiceLifecycle as required (the default), OSGi proceeds to deactivate our scheduled job component and invokes our component's deactivate method.
From here, the job will be unscheduled. The natural follow-up question is, "When will the scheduled job start to run again?"
Since unschedule is annotated with Clusterable, the call to unschedule will be broadcast to all active nodes of the cluster, and it will be executed on any node where SchedulerClusterInvokeAcceptor returns true. Naturally, all non-coordinator nodes in the cluster proceed to update their metadata and forget about the scheduled job, as a side-effect of LPS-66858, and the coordinator node will also stop running the job, so any time a new node starts up and tries to retrieve metadata on scheduled jobs, that job will be missing.
Since all nodes of the cluster have forgotten about the scheduled job (again, because unschedule actually unschedules the job), including any existing master node, then as a side-effect of LPS-66858, Liferay will never recover the lost scheduled job unless a node is immediately designated as the JGroups coordinator as it starts up. In practice, this scenario will only happen when the entire cluster is brought down and a new node is started, and the situation will reoccur on the next shutdown.
Liferay Scheduler Code Samples
We’ll assume that our scheduled job is configurable, following the tutorial on Making Your Applications Configurable in the developer guide, and that our configuration class, ExampleSchedulerConfiguration, has a getter method interval that returns the number of seconds between each scheduled job firing.
From there, I’ve created two sample scheduled job classes, ExampleMessageListener and ExampleSchedulerEventMessageListener, so that you can see the slight differences in the two implementations.
The two sample classes are described below.
Common Boilerplate
Next, we’ll think about all of the boiler plate that’s associated with creating scheduled jobs. This boiler plate is marked in the two sample scheduled job classes created for this post.
ExampleMessageListener
ExampleSchedulerEventMessageListener
Since we generate triggers using TriggerFactory, we'll ask OSGi to provide us with a reference to it.
@Reference(unbind = "-")
private volatile TriggerFactory _triggerFactory;
Note that Liferay won’t provide a reference to a TriggerFactory if scheduler is completely disabled, which means adding a reference to a TriggerFactory also means that our component will also not activate if scheduler is disabled.
From here, as a convenience method, since every implementation of a scheduled job always needs to create a SchedulerEntry, we’ll have a utility method return one, based on a trigger generated from a configuration (in this case ExampleSchedulerConfiguration).
protected SchedulerEntry getSchedulerEntry(Map<String, Object> properties) {
    ExampleSchedulerConfiguration configuration =
        ConfigurableUtil.createConfigurable(
            ExampleSchedulerConfiguration.class, properties);

    Class<?> clazz = getClass();

    String className = clazz.getName();

    Trigger trigger = _triggerFactory.createTrigger(
        className, className, null, null,
        configuration.interval(), TimeUnit.SECOND);

    return new SchedulerEntryImpl(className, trigger);
}
Because our scheduled job doesn’t have a fixed start time, it’s possible for the scheduled job to not fire immediately if we attempt to schedule it before Liferay begins scheduling jobs. This is because even though missed jobs will fire as long as it’s within the misfireThreshold, by default, this value is only five seconds, and Liferay doesn’t change the default for memory clustered jobs.
To avoid that problem, a common approach is to wait until the portal itself has started up. This next part is not necessary if your scheduled job has a predictable start time (cron expression, or you have a non-null start time that you’ve configured in some other way).
@Reference(target = ModuleServiceLifecycle.PORTAL_INITIALIZED, unbind = "-")
private volatile ModuleServiceLifecycle _moduleServiceLifecycle;
Then, we need code that will do the work of our scheduled job. In this example, we’ll just print a message using System.out.
protected void doReceive(Message message) throws MessageListenerException {
    Class<?> clazz = getClass();

    String className = clazz.getName();

    System.out.println(className + " received message on schedule: " + message);
}
At this point, the implementations will diverge, because there are two ways you can implement a scheduled job in 7.x. The approaches do not differ very much in terms of lines of code, but we’ll outline both approaches in the coming sections in case one is easier to understand than the other.
Using BaseMessageListener
You can create a scheduled job by extending BaseMessageListener, as is done in a lot of modern Liferay code and in the ExampleMessageListener code sample created for this post. You can also do this by extending the deprecated BaseSchedulerEntryMessageListener, as is done in the Liferay blade samples.
In both cases, the convention is to use your current class as the provided service, so that nothing accidentally finds it, because this message listener doesn’t have any real meaning unless it’s wrapped (more on that later).
@Component(
    configurationPid = "example.scheduler.entry.ExampleSchedulerConfiguration",
    immediate = true, service = ExampleMessageListener.class
)
public class ExampleMessageListener extends BaseMessageListener {
}
Next, when our component activates we’ll want to use SchedulerEngineHelper in order to: adapt our MessageListener as a SchedulerEventMessageListener, register the adapted wrapper as an OSGi component, and remember it so that it can be manually unregistered later. We do this by using the register method documented earlier.
We will also want to call this when the configuration is modified, because the register method will take care of updating the adapted wrapper using its service reference. Once the adapted wrapper has its service reference's configuration modified, this will trigger a modifiedService on the service tracker, which will also take care of updating the scheduled job. For this reason, in the Liferay blade samples, you don't see any special logic in @Modified annotated methods.
Finally, we’ll also want to make sure that the adapted wrapper is properly removed if our component gets deactivated (bundle stops, one of the service dependencies disappears). This can be done by calling unregister.
@Activate
@Modified
protected void activate(Map<String, Object> properties) {
    _schedulerEngineHelper.register(
        this, getSchedulerEntry(properties),
        DestinationNames.SCHEDULER_DISPATCH);
}

@Deactivate
protected void deactivate() {
    _schedulerEngineHelper.unregister(this);
}

@Reference(unbind = "-")
private volatile SchedulerEngineHelper _schedulerEngineHelper;
Using SchedulerEventMessageListener
You can also create a scheduled job by creating a component that provides the SchedulerEventMessageListener service and implements that interface, as is done in the ExampleSchedulerEventMessageListener code sample created for this post.
@Component(
    configurationPid = "example.scheduler.entry.ExampleSchedulerConfiguration",
    immediate = true, service = SchedulerEventMessageListener.class
)
public class ExampleSchedulerEventMessageListener
    implements SchedulerEventMessageListener {
}
Next, we need to actually implement the interface, which requires that we return a SchedulerEntry. To achieve this, we make sure that we notice both when the component is activated and when the configuration changes, and that we return a SchedulerEntry that reflects the updated configuration. Liferay and SchedulerEventMessageListenerWrapper will automatically handle the rest.
@Activate
@Modified
protected void activate(Map<String, Object> properties) {
    _schedulerEntry = getSchedulerEntry(properties);
}

@Override
public SchedulerEntry getSchedulerEntry() {
    return _schedulerEntry;
}

private SchedulerEntry _schedulerEntry;
As long as all you want is a scheduled job, your implementation is complete as soon as you implement a receive method, which could just delegate work to the doReceive from the boiler plate code we had before (or you could name that method receive from the beginning).
However, we have to keep in mind that when switching from Spring to OSGi, Liferay still hasn’t quite broken free from using wrappers to implement functionality. This means that if we choose the SchedulerEventMessageListener route, we have to take care of any functionality implemented in the wrapper that we want to make use of in our implementation.
For example, if you’ve decided to enable auditing scheduled events (as noted earlier, this was a feature added with LPS-25385, disabled by default), SchedulerEventMessageListenerWrapper is what normally calls the API to broadcast the audit event, and so you’ll need to call it from your scheduled job as well.
@Override
public void receive(Message message) throws MessageListenerException {
    doReceive(message);

    // Extra things done by SchedulerEventMessageListenerWrapper
    // that we want our scheduled job to do as well.
    try {
        _schedulerEngineHelper.auditSchedulerJobs(
            message, TriggerState.NORMAL);
    }
    catch (SchedulerException se) {
        throw new MessageListenerException(se);
    }
}

@Reference(unbind = "-")
private volatile SchedulerEngineHelper _schedulerEngineHelper;
Since there is no way for anyone to know how many implementation details will be added to the wrapper class, implementing SchedulerEventMessageListener directly isn't common in Liferay code examples, because you would lose all of the additional functionality provided by the wrappers.
Posted over 6 years ago by Mohammed yasin
Liferay Forms is one of the most powerful features in Liferay: it lets us create and edit forms without code changes. Today we will see how to use a Liferay Form in our custom module, including editing and updating its records.
Step 1. Create and configure the form by referring to the URL below:
https://portal.liferay.dev/docs/7-2/user/-/knowledge_base/u/creating-and-managing-forms
Step 2. Once a form is created, you can use its ID to reference the form.
Step 3. We will create a Liferay MVC module which will render, add, and edit this form.
To render the form, use the code below:
@Override
public void doView(RenderRequest renderRequest, RenderResponse renderResponse)
    throws IOException, PortletException {

    long formInstanceRecordId = ParamUtil.getLong(renderRequest, "formInstanceRecordId");

    DDMFormInstanceRecord ddmFormInstanceRecord = null;

    if (formInstanceRecordId > 0) {
        ddmFormInstanceRecord =
            DDMFormInstanceRecordLocalServiceUtil.fetchDDMFormInstanceRecord(formInstanceRecordId);
    }

    DDMFormInstance ddmFormInstance = DDMFormInstanceLocalServiceUtil.fetchDDMFormInstance(34296L);

    DDMStructureVersion ddmStructureVersion = null;

    String html = StringPool.BLANK;

    try {
        ddmStructureVersion = ddmFormInstance.getStructure().getLatestStructureVersion();
    }
    catch (PortalException e) {
        e.printStackTrace();
    }

    DDMForm ddmForm = ddmStructureVersion.getDDMForm();

    DDMFormLayout ddmFormLayout = null;

    try {
        ddmFormLayout = ddmStructureVersion.getDDMFormLayout();
    }
    catch (PortalException e) {
        e.printStackTrace();
    }

    DDMFormRenderingContext ddmFormRenderingContext = new DDMFormRenderingContext();

    ddmFormRenderingContext.setContainerId("ddmForm".concat(StringUtil.randomString()));

    if (Validator.isNotNull(ddmFormInstanceRecord)) {
        try {
            ddmFormRenderingContext.setDDMFormValues(ddmFormInstanceRecord.getDDMFormValues());
        }
        catch (PortalException e) {
            e.printStackTrace();
        }
    }
    else {
        ddmFormRenderingContext.setDDMFormValues(
            _ddmFormValuesFactory.create(renderRequest, ddmForm));
    }

    HttpServletRequest httpServletRequest = PortalUtil.getHttpServletRequest(renderRequest);

    ddmFormRenderingContext.setHttpServletRequest(httpServletRequest);
    ddmFormRenderingContext.setHttpServletResponse(
        PortalUtil.getHttpServletResponse(renderResponse));
    ddmFormRenderingContext.setLocale(PortalUtil.getLocale(renderRequest));
    ddmFormRenderingContext.setPortletNamespace(renderResponse.getNamespace());
    ddmFormRenderingContext.setViewMode(true);

    try {
        html = _ddmFormRenderer.render(ddmForm, ddmFormLayout, ddmFormRenderingContext);
    }
    catch (DDMFormRenderingException e) {
        e.printStackTrace();
    }

    renderRequest.setAttribute("formHtml", html);

    super.doView(renderRequest, renderResponse);
}
In view.jsp, print the rendered HTML:
${formHtml}
Note: 34296L is the ddmFormInstanceId. It could come from a configuration option or from user input; for this example I am hard-coding it.
To add or edit form records, use the code below:
@Override
public void processAction(ActionRequest actionRequest, ActionResponse actionResponse)
    throws IOException, PortletException {

    ServiceContext serviceContext = null;

    long formInstanceRecordId = ParamUtil.getLong(actionRequest, "formInstanceRecordId");

    DDMFormInstance ddmFormInstance = DDMFormInstanceLocalServiceUtil.fetchDDMFormInstance(34296L);

    DDMStructureVersion ddmStructureVersion = null;

    try {
        ddmStructureVersion = ddmFormInstance.getStructure().getLatestStructureVersion();
    }
    catch (PortalException e) {
        e.printStackTrace();
    }

    DDMForm ddmForm = ddmStructureVersion.getDDMForm();

    DDMFormValues ddmFormValues = _ddmFormValuesFactory.create(actionRequest, ddmForm);

    try {
        serviceContext = ServiceContextFactory.getInstance(
            DDMFormInstanceRecord.class.getName(), actionRequest);
    }
    catch (PortalException e) {
        e.printStackTrace();
    }

    if (formInstanceRecordId > 0) {
        try {
            DDMFormInstanceRecordLocalServiceUtil.updateFormInstanceRecord(
                serviceContext.getUserId(), formInstanceRecordId, Boolean.FALSE,
                ddmFormValues, serviceContext);
        }
        catch (PortalException e) {
            e.printStackTrace();
        }
    }
    else {
        try {
            DDMFormInstanceRecordLocalServiceUtil.addFormInstanceRecord(
                serviceContext.getUserId(), serviceContext.getScopeGroupId(),
                ddmFormInstance.getFormInstanceId(), ddmFormValues, serviceContext);
        }
        catch (PortalException e) {
            e.printStackTrace();
        }
    }
}
Note: once the form record is saved, its ID (the formInstanceRecordId) can be used for future reference.
Posted over 6 years ago by Jaclyn Ong
Recently, I was faced with the task of importing 9,000+ products from several warehouses and 6,000+ product images into Liferay Commerce. The upsert code itself to import the data was fairly straightforward, but the challenge was the sheer number of records inserted initially. The import was bound to be long-running, no doubt, and was going to run in the middle of the night anyway, but I wanted to find any way I could to "trim the fat." Let me share with you a few tips I learned from others and my own experience that helped me slim down the import.
A Little Bit of Background
Because importing each product into the Liferay Commerce catalog involves more than just 1 table, it was going to be 1,000s of records for each table. I couldn't really use some batch update tool that updates records in bulk or ActionableDynamicQuery (which doesn't do inserts, only updates, as far as I know), so I called the Liferay Commerce API which took care of inserting/updating all the right entities for me. It was an upsert-only operation, no deletions, to avoid the risk of the wrong data getting deleted.
Importing the 6,000+ images from the warehouse was going to take the longest the first time they were downloaded, but, after that, they wouldn't need to be updated every time. Hotlinking is a way to avoid the import altogether, but that's often banned by sites for their own good.
(In case you're wondering, Talend ETL jobs were an option, but we ended up writing our own import.)
Shorter Subsequent Imports
The first time the import is run should be the longest, and subsequent imports should be shorter. (This tip doesn't apply if all the data changes frequently and must be updated every time the import is run.) See if you can find which data does not need to be updated on every run. Some data, like a product image, doesn't change frequently, so it doesn't need to be updated every day. Even if it does eventually need to change, you can add an override flag that forces an update while keeping the import shorter most of the time; a sketch of such a check follows.
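As an illustration, the "does this need updating?" decision can be a simple timestamp comparison. This is only a sketch; the class name, the force flag, and the dates are all invented for the example:
import java.util.Date;

public class ImportFilter {

	private final boolean _forceUpdate;
	private final Date _lastImportDate;

	public ImportFilter(Date lastImportDate, boolean forceUpdate) {
		_lastImportDate = lastImportDate;
		_forceUpdate = forceUpdate;
	}

	// Slow-changing data (like product images) is only reimported when the
	// source is newer than the last import, unless an update is forced.
	public boolean needsUpdate(Date remoteModifiedDate) {
		if (_forceUpdate) {
			return true;
		}

		return remoteModifiedDate.after(_lastImportDate);
	}

}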
Small Transactions
Running the entire import in one big transaction leads to timeout errors. Use a single transaction for each entity (in this case, each product) that you're inserting, as in the sketch below.
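Here is a minimal sketch of one-transaction-per-product using Liferay's TransactionInvokerUtil; the SKU list and _upsertProduct() are hypothetical stand-ins for the real Commerce API calls:
import com.liferay.portal.kernel.transaction.Propagation;
import com.liferay.portal.kernel.transaction.TransactionConfig;
import com.liferay.portal.kernel.transaction.TransactionInvokerUtil;

import java.util.List;

public class ProductImporter {

	// One REQUIRES_NEW transaction per product keeps each upsert small and
	// lets a failure roll back only that product, not the whole import.
	public void importProducts(List<String> productSkus) {
		TransactionConfig transactionConfig = TransactionConfig.Factory.create(
			Propagation.REQUIRES_NEW, new Class<?>[] {Exception.class});

		for (String sku : productSkus) {
			try {
				TransactionInvokerUtil.invoke(
					transactionConfig,
					() -> {
						_upsertProduct(sku);

						return null;
					});
			}
			catch (Throwable throwable) {

				// Log and continue so one bad product doesn't abort the rest.

				throwable.printStackTrace();
			}
		}
	}

	// Hypothetical: would call the Liferay Commerce APIs to insert/update one
	// product and its related entities.
	private void _upsertProduct(String sku) {
	}

}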
Small Memory Footprint in the Code
You can make the memory footprint smaller by doing the following:
Avoid unnecessary local reference variables; inline method calls where you can, and move object creation into helper methods or use existing factories (like Collections.singletonMap).
Avoid creating the same objects over and over again inside loops when a single instance can be reused across iterations.
Avoid String.format (simple + concatenation or StringBundler can perform better; see the sketch after this list).
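To illustrate the StringBundler point, a small sketch (the class lives in com.liferay.petra.string on 7.1+; the method name and message are made up for the example):
import com.liferay.petra.string.StringBundler;

public class ImportMessages {

	// Builds a log line without String.format; presizing the StringBundler to
	// the number of append() calls avoids growing its internal array.
	public static String importedProduct(String sku, long elapsedMillis) {
		StringBundler sb = new StringBundler(5);

		sb.append("Imported product ");
		sb.append(sku);
		sb.append(" in ");
		sb.append(elapsedMillis);
		sb.append(" ms");

		return sb.toString();
	}

}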
Troubleshooting a Subset of Data
Let's face it: troubleshooting a long-running import is laborious. Sometimes, when troubleshooting certain issues, I wanted to run the import over only a small subset of the data. One way to do this is to make the import take in a start and end index so that you can run it over a range, say, only the first 50 records. I added a few System Settings for the start and end indexes to configure the range. The caveat to configuring a range like this is that I was depending on the data to be in a certain order. The order of the product data from the 3rd party warehouse APIs (since there was one API for the catalog and one for the inventory of the products in the catalog) happened to come back from the warehouse in the same order every time. So the product that appeared first from the catalog API was also the first to appear in the inventory API.
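Such a System Settings entry might look like the following sketch, using the bnd metatype annotations; the configuration id, field names, and defaults are illustrative, and registering the interface with Liferay's configuration framework is omitted:
import aQute.bnd.annotation.metatype.Meta;

// Illustrative only: exposes a start/end range for the import under System
// Settings once wired into Liferay's configuration framework.
@Meta.OCD(id = "com.example.importer.ProductImportConfiguration")
public interface ProductImportConfiguration {

	// First record (inclusive) to import; 0 means start from the beginning.
	@Meta.AD(deflt = "0", required = false)
	int startIndex();

	// Last record (exclusive) to import; -1 means run to the end.
	@Meta.AD(deflt = "-1", required = false)
	int endIndex();

}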
Before these changes, the import of just the products (not including the images) took 30 minutes or so at minimum and ended up timing out. After these changes, the product import was around 10 minutes, not lightning fast, but much better. The main change that helped was using a single transaction per upsert.
These are just a few tips I learned, and I hope they help someone else facing the same issue. I'd be happy to hear any tips or experiences you have!
Posted over 6 years ago by Yanan Yuan
The Liferay IntelliJ 1.5.1 plugin has been made available. Head over to this page to download it.
Release Highlights:
Update embedded blade to 3.7.3
Add an "enable target platform" option in the new Liferay Gradle workspace wizard
Add support for watching a project again after watching has been stopped
Better support for the JSP editor
Bug fixes for project dependencies
Some screenshots:
[Screenshot: the "enable target platform" option in the new Liferay Gradle workspace wizard]
[Screenshot: the JSP editor]
Special Thanks
Thanks to Dominik Marks for the improvements.
Feedback
If you run into any issues or have any suggestions, please come find us on our community forums or report them on JIRA (INTELLIJ project); we are always around to help you out. Good luck!
Posted over 6 years ago by Yanan Yuan
Head over to this page to download the Liferay IntelliJ 1.5.0 plugin.
Release Highlights:
Add support for development on Liferay 7.2
Update embedded blade to 3.7.0
Add an "index sources" option in the new Liferay Workspace wizard
Add inspections for the service.xml editor
Some screenshots:
[Screenshot: the "index sources" option in the new Liferay Workspace wizard]
By default, "index sources" is not checked, and the target.platform.index.sources property is set to false in gradle.properties.
[Screenshot: inspections for service.xml]
Special Thanks
Thanks to Dominik Marks for the improvements.
Feedback
If you run into any issues or have any suggestions, please come find us on our community forums or report them on JIRA (INTELLIJ project); we are always around to help you out. Good luck!
Posted over 6 years ago by Yanan Yuan
New Liferay Project SDK Installers and Developer Studio 3.6.1 GA2 have been made available today. This new package is based on Eclipse 2018-12 and supports Eclipse Photon or greater.
Download:
Customer downloads:
Download all of them on the customer studio download page.
Community downloads:
https://liferay.dev/project/-/asset_publisher/TyF2HQPLV1b5/content/ide-installation-instructions
Release highlights:
Installers Improvements:
Bundled with Liferay Portal 7.2.0 GA1 in Liferay Project SDK installers
Development Improvements:
Updated gradle plugin buildship to the latest 3.1.1
Improved deployment support for Liferay 7.x
Integration of Blade CLI 3.7.3
Support for Liferay Workspace Gradle 2.0.4
Better support for wizards:
New Liferay Module project
New Liferay Module Ext project
New Liferay Workspace project
Adapted editors in module projects:
Service Builder editor
Liferay Hook editor
Improvements on upgrade planner
Miscellaneous bug fixes
[Screenshot: the installers]
[Screenshot: the upgrade planner]
[Screenshot: the wizards]
[Screenshot: the editors]
Note: the Liferay Hook editor requires an existing Liferay server.
Feedback
If you run into any issues or have any suggestions, please come find us on our community forums or report them on JIRA (IDE project); we are always around to help you out. Good luck!
Posted over 6 years ago by Yanan Yuan
The new release of Liferay Project SDK and Studio Installers 3.6.0 GA1 has been made available.
Download:
Customers can download all of them on the customer studio download page. This new package supports Eclipse Photon or greater.
Release highlights:
New Liferay Upgrade Planner (replacement of the code upgrade tool)
New Liferay Modules Ext Project
Better deployment support for Liferay 7
Liferay 7.2 Tomcat and WildFly support
Integration of the latest Blade CLI 3.7.0
Support for the latest Liferay Workspace 2.0.3
Remove support for
Miscellaneous bug fixes
New Liferay Modules Ext Project wizard
New Liferay Upgrade Plan
Creating a new upgrade plan requires an internet connection. Check the upgrade planner documentation for more details.
To start a new upgrade plan:
Click on Project > New Liferay Upgrade Plan…
Open the Liferay Upgrade Planner perspective
Click the New Upgrade Plan shortcut
Double-click each step as prompted
Feedback
If you run into any issues or have any suggestions, please come find us on our community forums or report them on JIRA (IDE project); we are always around to help you out. Good luck!