Posted over 6 years ago by David H Nebinger
Recently when I was working on my custom Headless API blog series, I ran into a bit of trouble with my Service Builder-based persistence tier.
My SB code was done and working, and I was adding methods to my CLI tool to test all of the Headless methods.
I had the list working, I could add, update and patch Vitamins, and I had just finished testing the delete method; I was on cloud nine...
Everything was working! Great! I took one more stroll through the CLI to see that the commands were going to work after adding some shortcuts...
I went through the following sequence:
Add a vitamin.
Add a vitamin.
List.
Add a vitamin.
Add a vitamin.
List.
Patch a vitamin.
Put a vitamin.
List.
Delete a vitamin.
List.
Wham! No entity found with primary key 105!
What was going on? I mean, I'm just doing a listing, how does a listing result in an exception for the "No entity found with primary key"?
So I thought back to the section of the blog for building the Listing method: https://liferay.dev/blogs/-/blogs/creating-headless-apis-part-4#implementing-getvitaminspage
I remembered that the listing was actually doing an index search, and that the last function reference passed to the Liferay implementation was a function that used the primary key to look up the PersistedVitamin object.
So basically I had done a delete, but the subsequent list used the index, which still returned a document for the deleted vitamin; the lookup function failed to find a value and the exception was thrown.
This meant that my delete was removing the record from the database, but it was not updating the index, even though I thought my annotations were correct.
And (with some help from friends) I realized I had broken the rules for the @Indexable annotation...
So I thought I'd do a quick post to refresh us on the rules so maybe I'll remember not to break them again...
Rule #1 - Methods Must Return an Entity
This one may sound kind of weird, but it is important. The Liferay code that applies the indexing AOP wrapper comes from the IndexableAdvice class, and this class checks the method return type to ensure it is returning something that extends BaseModel (all entities extend BaseModel).
So you can't define your method like:
@Indexable(type=IndexableType.DELETE)
public void deleteVitamin(PersistedVitamin vitamin) {...}
Even though you might not need or want the deleted entity, without returning it the AOP aspect won't wrap the method, and the document won't be removed from the index when the entity is deleted.
You might ask why this is. By returning the deleted entity, the wrapping AOP logic can get the entity id and use that to find the appropriate document to remove from the index. If the entity is not returned, the aspect has no visibility on which document to remove from the index.
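The fix is simply to return the entity. A minimal sketch of the corrected signature, using the entity from my example:

@Indexable(type = IndexableType.DELETE)
public PersistedVitamin deleteVitamin(PersistedVitamin vitamin) {...}

With the entity as the return type, the IndexableAdvice can wrap the method and remove the matching document from the index.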
Rule #2 - Method Entry Points Must Be Hit
So this rule is based on how transactions are applied to Service Builder methods...
Transactions are applied on the entry point into your SB layers, so you'll get a read/write or a read-only transaction on the main entry point and this will be inherited by most other internal method calls.
So if you called your SB method "getAndDeleteEntity()", this gets wrapped in a read-only transaction (because of the "get" prefix) and any deletions occurring inside of the method would get lost.
Additionally, because AOP wraps the service entry point, not the actual method in the XxxLocalServiceImpl class itself, you may need to go through an injected XxxLocalService reference to hit the method, just to make sure the AOP aspect is applied when the call returns.
So, for example, my PersistedVitaminLocalServiceImpl can have a deleteVitamin() method where I might want to do things like drop resource actions, etc. But if I end the method with:
return super.deletePersistedVitamin(vitamin);
My entity would be deleted, but the AOP aspects are not wrapped around the "super" object; they are wrapped around the actual service instance.
The actual local service instance is @Reference'd in for me by the superclass, so if I change my ending line to:
return persistedVitaminLocalService.deletePersistedVitamin(vitamin);
This time, since I'm going through the actual service, the AOP aspect has wrapped the call; my entity will be deleted and the aspect will remove the entity's document from the index.
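Putting both rules together, here is a hedged sketch of what a deleteVitamin() method in PersistedVitaminLocalServiceImpl can look like (the resource action cleanup is just an example of the kind of extra work you might do in such a method):

@Indexable(type = IndexableType.DELETE)
public PersistedVitamin deleteVitamin(PersistedVitamin vitamin) throws PortalException {

    // Example of custom work before the actual delete, e.g. dropping the entity's resource actions.
    resourceLocalService.deleteResource(
        vitamin.getCompanyId(), PersistedVitamin.class.getName(),
        ResourceConstants.SCOPE_INDIVIDUAL, vitamin.getPrimaryKey());

    // Go through the injected service reference, not super, so the call passes through the
    // AOP-wrapped service instance and the entity's document is removed from the index.
    return persistedVitaminLocalService.deletePersistedVitamin(vitamin);
}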
Conclusion
Well, those are the two rules. If you follow the rules, your index updating annotations will apply correctly.
I know I focused here on the delete type, but the same rules apply to the update type used on add and update methods in the service tier.
Posted over 6 years ago by David H Nebinger
Introduction
I've recently started working on a React SPA to take advantage of the Liferay Headless APIs. I was working through all of my implementation details and was finally ready to start making API calls, but I needed to figure out how to handle authenticated requests.
I reached the following point in the documentation, https://portal.liferay.dev/docs/7-2/frameworks/-/knowledge_base/f/making-authenticated-requests#oauth-20-authentication and I went ahead and implemented the client credentials authorization flow and was happily retrieving web contents when a thought struck me...
What if I wanted to author a web content article?
I quickly realized that the Client Credentials authorization flow is not going to be the best type in all cases. I also didn't find any guidance in the documentation on how to pick the right authorization flow, so I thought I'd pen a quick blog to help you choose the best option for you.
OAuth 2.0 Authorization Flows
Liferay supports four different authorization flows:
Authorization Code Flow
PKCE Extended Authorization Code Flow
Client Credentials Authorization Flow
Resource Owners Authorization Flow
Each of these authorization flows is different, but they all have the same result: they return an Access [Bearer] Token. This token gets submitted with each headless API request (or /api/jsonws request or classic REST request) and will be used to allow access to the API endpoints.
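Regardless of which flow you choose, using the token afterwards looks the same. As a quick, hedged illustration (the endpoint and function name are just examples), a headless call with the token might look like:

import axios from 'axios';

// Attach the access token as a Bearer token on a headless API request.
export const getBlogPostings = (baseUrl, siteId, accessToken) =>
  axios.get(`${baseUrl}/o/headless-delivery/v1.0/sites/${siteId}/blog-postings`, {
    headers: {
      Authorization: `Bearer ${accessToken}`,
    },
  });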
In each of the flows, you will be using your registered client ID (you get a client ID and sometimes a client secret code when you register your application in the OAuth 2 Administration control panel) as part of the request parameters when asking for the Access Token.
Let's take a look at each of the flows and identify their pros and cons and recommended use cases.
Resource Owners Authorization Flow
The Resource Owners Authorization Flow is an infrequently used flow and not suggested at all for SPAs.
Getting an access token with the Resource Owners flow includes passing the username and password in clear text with the token request. Liferay shares the following example for an access token request using the Resource Owners flow:
https://[hostname]/o/oauth2/token
?grant_type=password
&client_id=[client ID]
&client_secret=[client secret]
&username=[[email protected]]
&password=[password]
Although I broke this up across multiple lines, the request itself would be one single URL.
This request will give you an access token, but as you can see it leaks your username and password in the URL itself. That's far too insecure for a SPA; you should never expose your own or anyone else's credentials that way.
Pros:
- Easy to see who is authenticating.
- Single request to receive an access token.
- The user passed is the user authenticated on Liferay's side.
- No backend interaction for authorized access.

Cons:
- Exposes the username and password in cleartext.
- Even over HTTPS, credentials in the URL can end up in server logs, proxies and browser history.
- May require user interaction to collect user credentials.
When is this authorization flow a good choice? Never, if you ask me. The cleartext exposure of the username and password runs too great a risk of being intercepted and used in ways you would never approve.
I guess if you had a secured app, well away from public access, inside of your organization but protected by layers of firewalls and security to prevent hacker access, maybe you might be safe leveraging this kind of authorization flow, but generally I can't see this being appropriate for any public use, especially for a SPA.
Client Credentials Authorization Flow
Although this authorization flow is also infrequently used, it is the flow suggested in Liferay's documentation introducing the new Headless APIs, covered here: https://portal.liferay.dev/docs/7-2/frameworks/-/knowledge_base/f/making-authenticated-requests#obtaining-the-oauth-20-token
Liferay's example for the Client Credentials flow is:
https://[hostname]/o/oauth2/token
?grant_type=client_credentials
&client_id=[client ID]
&client_secret=[client secret]
Pros:
- Simplest request without leaking details.
- Permissions totally controlled by the server side.
- No backend interaction for authorized access.

Cons:
- Cannot represent different users.
- Cannot represent different access levels.
The key part of the client credentials flow is that the credentials to use are determined and set by the application owner, the person that registers the application in the OAuth 2 Administration control panel.
In my React SPA, I'm using Axios and QS to make this call to retrieve the access token; the method I use is:
import axios from 'axios';
import qs from 'qs';

// POST the client credentials to Liferay's token endpoint as a form-encoded body.
export const requestClientCredentialsAccessToken = (baseUrl, clientId, secret) => {
  const params = {
    client_id: clientId,
    client_secret: secret,
    grant_type: 'client_credentials',
  };

  return axios.post(`${baseUrl}/o/oauth2/token`, qs.stringify(params), {
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
    },
  });
};
When defining an application that uses the client credentials flow, the admin will select the user that anyone using the client ID and secret will be impersonating. You can select a system admin (not recommended) down to simple guest access.
So this selection is key; you want to pick a user to impersonate that has the necessary access to Liferay services, but no more than what the application is going to need. Anyone with the client id and secret can request a token and then start calling headless APIs, /api/jsonws or classic REST APIs. So you want to ensure that the selected user doesn't have access to anything outside of what the SPA application needs in case you lose control of the client IDs/secrets.
When is this type of authorization flow useful? I would say if you are doing read-only access to portions of Liferay, this flow is for you. There's no need to gather user credentials, no need for interacting with the backend for authorization, and you can grab a token and start using it for read-only requests.
It is not going to be good, however, for data creation, updates or deletions, for auditing purposes (who is viewing what), or for supporting different levels of access depending upon user privileges. The impersonated user (designated in the OAuth 2 Admin control panel entry for the app) determines the access every caller gets, any create/change/delete can only be stamped with that user, and for audit purposes it will look like this one user is doing everything (because every incoming API request impersonates that user).
But if you have a custom SPA with your own logic, using your own datastore (outside of Liferay), and the only thing you want to do is pull in content from Liferay to display in your SPA? I think client credentials will be a super easy and effective way to do this, but you must take care when selecting the user to impersonate.
Authorization Code Flow
Authorization code flow is one of the most frequently used methods for OAuth 2, especially in web applications.
This flow operates in two steps. The first step is the request for authorization. The Liferay example URL for this is:
https://[hostname]/o/oauth2/authorize
?response_type=code
&client_id=[client ID]
The twist is that this is not sent as a background request; it is a redirect sent to the OAuth 2 provider. For Liferay, the user is redirected to a login page, and after that the user will see a dialog requesting authorization for the application (as entered in the OAuth 2 Admin control panel).
If the user authorizes the app, the browser is redirected back to the outside app (the redirect URL is entered in the OAuth 2 Admin control panel) and includes a code generated by the server.
The application can then issue a POST request for an access token, including the code, with a URL similar to Liferay's example:
http://localhost:8080/o/oauth2/token
No URL parameters with this one, instead the request body will be x-www-form-urlencoded with the following parameters:
client_id=[client ID]
client_secret=[client secret]
grant_type=authorization_code
code=[authorization server generated code]
redirect_uri=[registered callback URI]
Liferay will generate an access token and return it in the response body for the submission.
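Following the same Axios/QS pattern as the client credentials call above, a hedged sketch of that code-for-token exchange might look like this (the function name and parameters are just for illustration):

import axios from 'axios';
import qs from 'qs';

// Exchange the authorization code (received on the redirect) for an access token.
export const requestAuthorizationCodeAccessToken = (baseUrl, clientId, secret, code, redirectUri) => {
  const params = {
    client_id: clientId,
    client_secret: secret,
    grant_type: 'authorization_code',
    code: code,
    redirect_uri: redirectUri,
  };

  return axios.post(`${baseUrl}/o/oauth2/token`, qs.stringify(params), {
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
    },
  });
};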
Along with the access token, you'll also get a refresh token. The refresh token can be used after the access token expires, to request a new access token without going through the full authorization process again.
Pros:
- No need to capture or know user credentials.
- Access token is user specific, so APIs will have access to the real user as well as the permissions the user has in Liferay.
- Can refresh an access token without redirecting to Liferay for authorization.

Cons:
- Redirects to Liferay for the authorization dialog can drop the context in a SPA.
- Client ID can be sniffed as part of the auth request.
- Must persist the refresh token where the application can use it later, typically in local storage.
- SPAs would need to leverage popup windows so the main application can stay in the browser and retain context.
- Few library choices to help with the dialog and auth process.
For my React SPA, I needed a popup window to do the Liferay authorization in. I ended up adapting https://github.com/Ramshackle-Jamathon/react-oauth-popup to handle the popup; it worked quite well (I added PKCE support, covered in the next section).
While this is an effective system to handle authorization, it does expose the client ID and can potentially be used by another application to get an access token. If you are leaning towards implementing the Authorization Code Flow, I'd encourage you to take one step farther and implement the next flow.
PKCE Extended Authorization Code Flow
PKCE (pronounced "Pixie") is an acronym for Proof Key for Code Exchange. PKCE follows the same steps as the Authorization Code Flow, but with the following changes:
For the /o/oauth2/authorize request, an additional value is passed in as the code_challenge parameter. This is a value that is passed through a one-way hash. The algorithm, shared below, will compute a value that is passed as the code_challenge value for verification in the next step.
For the /o/oauth2/token request for the access token, the pre-hashed value is sent as the code_verifier parameter.
The OAuth 2 provider will verify that the code_verifier code can be passed through the known hash to become the code_challenge provided in the authorize request.
Because of the hash value comparison, this flow helps to protect the client id and access token from misuse by bad actors.
Otherwise, the same steps from the Authorization Code Flow apply. The authorize request will require a login and an authorization dialog from Liferay. The received access token will come with a refresh token that can be used to get a new access token in the future without going through the authorization dialog again.
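To make that concrete, the requests look like the Authorization Code Flow requests shown earlier with the PKCE values added (placeholders only; depending on your OAuth 2 Administration settings you may also need to indicate which challenge method you used):

https://[hostname]/o/oauth2/authorize
?response_type=code
&client_id=[client ID]
&code_challenge=[base64url(SHA-256(code verifier))]

And the x-www-form-urlencoded body of the token request gains one parameter:

client_id=[client ID]
client_secret=[client secret]
grant_type=authorization_code
code=[authorization server generated code]
redirect_uri=[registered callback URI]
code_verifier=[original random code verifier]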
The PKCE flow is best for applications that may not be able to guarantee the security of the Client Secret for the application.
Pros:
- No need to capture or know user credentials.
- Access token is user specific, so APIs will have access to the real user as well as the permissions the user has in Liferay.
- Can refresh an access token without redirecting to Liferay for authorization.
- Protects the Client ID by requiring a code verifier and challenge.

Cons:
- Redirects to Liferay for the authorization dialog can drop the context in a SPA.
- Must persist the refresh token where the application can use it later, typically in local storage.
- SPAs would need to leverage popup windows so the main application can stay in the browser and retain context.
- Few library choices to help with the dialog and auth process.
Creating the Code Verifier and Code Challenge Values
PKCE requires two pieces of data: a code verifier and a code challenge.
The code verifier value should be a random string using alphanumeric characters (plus the period, the dash, the underscore and the tilde characters) anywhere from 43 to 128 characters in length.
The code challenge is the base64 URL-encoded SHA-256 hash of the code verifier value.
Code I used in my React application to create a code verifier value:
// Allowed characters for the code verifier (alphanumerics plus "-", ".", "_" and "~").
const S256_CHARS = [..."ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789abcdefghijklmnopqrstuvwxyz-._~"];

// Build a random string of 43 or more characters from the allowed characters.
this.codeVerifier = [...Array(43 + (Math.random() * 85 | 0))]
  .map(() => S256_CHARS[Math.random() * S256_CHARS.length | 0])
  .join('');
The code I used to compute the code challenge value:
import crypto from 'crypto';
import base64url from 'base64url';

// SHA-256 hash of the verifier, then base64 URL-encode the digest to get the challenge.
const hash = crypto.createHash('sha256').update(this.codeVerifier).digest();
const code_challenge = base64url.encode(hash);
For the code challenge value, I used the following NPM packages:
"base64url": "^3.0.1",
"crypto": "^1.0.1",
Conclusion
So, where does that leave us?
I think you have two basic options to look at:
Client Credentials Flow - This one is great if you only need read-only access to Liferay APIs and don't care to audit what is being retrieved. It is a non-intrusive flow that any SPA can easily use to get an access token and retrieve records from Liferay.
PKCE Extended Authorization Code Flow - This one covers the cases where read-only access isn't enough: cases where you want to create, update or delete data in Liferay, cases where you want users to be represented by their own credentials and have their own permissions applied to the APIs they invoke, and even cases where you want to track what users are retrieving. It is a tiny bit more work on top of the Authorization Code Flow, but the extra verification is another layer meant to protect the application and the client ID.
For the other flows, Resource Owners exposes credentials and should immediately be disqualified. Authorization Code Flow is good, but a few minor additional steps allow you to implement PKCE, so why stop inches before the goal?
Hopefully this will make the job of choosing the right authorization flow easier.
Posted over 6 years ago by Andrew Jardine
It's been about ten years now, so if that isn't a sign of dedication to a product, then I don't know what is. Last year at DevCON in Amsterdam I shared the first part of my story, The Journey of a Liferay Developer: The Search for Answers -- how I discovered the platform, the challenges I faced learning it, and the road I walked to get to where I am today (or rather where I was in October 2018). Mastering Liferay is the next leg of my journey and this post is intended in part to share this new site with the community, but also to provide some back story on how we got here. If you don't care about the story, then you can just go to the site... BUT! I like telling stories, so I think you should read on (or at least come back!) :)
Origin Story
I've spent pretty much my entire career helping others realize their visions by turning their dreams into bits and bytes we call software. This time is different however, for the first time in my career, I didn't just build the product, I own it. The fear and trepidation of pushing the button and making it live? Today, this fear was mine to own and this time I had to tell myself, "Stop being a baby and PUSH THE DAMN BUTTON!"
Like many of you, I'm sure, I've worked with A LOT of different tech over the years. The one staple in my toolbox though? The one tool I come back to as often as I can? That product is Liferay Portal. For me, it's such a beautiful product, capable of solving a litany of common business problems right out of the box. In fact, there are so many features that come ready to use that I find myself writing less and less custom code for my clients -- which means delivering faster, and saving them money. For those cases where something custom is required, Liferay gives me first class tools, all while handling much of the complexity to make development fast and fun. I find myself saying more and more -- "All you need is a little imagination", right? Well, that, and some understanding of how to work with Liferay, of course. But that isn't a limitation of Liferay. Your success with a table saw can also be tied to experience. Understanding the species of wood you are working with, the type of blade installed on the saw, or even how to set up the cut -- these are all factors that determine the outcome and how successful it is.
Experience
I've spent the better part of a decade now working with clients that have Liferay as part of their tech stack. I've helped many organizations architect and build their Liferay solutions, but a big part of what I do is also helping them understand how to best leverage their investment in Liferay. It usually starts by exposing product features, or revealing secrets of Liferay's API. With that said, it almost always morphs into coaching and mentoring their developers, helping their teams strengthen their understanding of Liferay, and helping them see Liferay as more than just a content management system; it's a DEVELOPMENT PLATFORM. Helping other developers? Showing people the awesomeness of Liferay? That's the best part of my day.
Community Driven
What seems like years ago now, my friend Julien and I were sitting around talking about the challenges we, and others, face with Liferay. Whether it's the whole product, or just one small feature, first experiences are so important in shaping opinions. So, how can we show more people just how awesome Liferay is and create that positive first experience in those critical first few hours of discovery? What is it that stops developers from understanding the power of any product? We believe that frustration is the great inhibitor. Mix that in with the pressure to deliver and you have the perfect recipe for a bad day. Initial frustration almost always stems from a lack of understanding, and not necessarily because it's "complicated", but, more likely, because it's NEW. We felt, alleviating the frustration was the key to success. But how do you stop something from being new? Well, you can't. But, you CAN limit the feeling of helplessness that results from a lack of knowledge by providing someone with clear, direct answers and solid examples.
This conversation took place almost four years ago now and the landscape has definitely changed since then. The amount of information on how to use Liferay has vastly improved. The documentation is better (kudos to Cody and his team), you have great community blog posts (hats off to David Nebinger as always), and you have Liferay University (thanks in no small part to the resident dean, Olaf). Four years ago though, most of what you have today was absent, so Julien and I decided that THIS was a problem we could, and wanted to, help solve.
The Journey of a Liferay Developer: Part 2 - A Source of Truth
Julien and I embarked on a journey of our own. We had a few false starts, and a couple of reboots along the way, but a little more than a year ago we decided to put all our chips on the table and focus on this project. We had a handful of goals --
1. Make it widely available to reach as many people as possible
2. Use a format where information was easy to digest
3. Accelerate development and enhance the learning experience with tools designed to make it easier
4. Make it free
5. Build it using Liferay, of course :)
I'm proud to announce that we're FINALLY launching our site - a video based training and tutorials platform, built on top of Liferay, for Liferay and its community of developers and users. I know it may not sound very original, but we feel there is a differentiation. We like to call our video format "tactical based training". I know -- it sounds very James Bond-ish right? All jokes aside, we use this terminology because of the way we produce our content. Our goal is to record videos that distill content down to a very specific task that you might need to perform and forego the rhetoric of surrounding features that are not important for what you are trying to accomplish. The workflow is simple and probably similar to the way you already work:
1. Search
2. Watch a 10 minute video
3. Code, using our example as a starter
4. Move to the next requirement and repeat
Everyone knows that content is king, and our catalog is growing quickly, with new videos and topics all the time -- which means, what is not there today, might well be there tomorrow. We also welcome suggestions or requests for videos from our members -- we want to build the content YOU want to watch, not the content we THINK you should watch. Our vision is to have thousands of videos, each cataloging how to do one small part of a larger solution. Yes, you read that right, thousands. It's an ambitious goal, but one we are keen to achieve.
Best part? most of this tool? in the spirit of open source? is FREE.
Ladies and gentlemen, introducing a year of blood, sweat and tears. I give you ...
Posted over 6 years ago by Christoph Rabel
This post assumes that you are at least familiar with Webpack. If not, you might find this post helpful. It first explains why we stopped developing portlets and moved to a javascript frontend + rest services approach. It doesn’t cover the rest backend at all. Then it explains why we use Webpack as a task runner and compares it a bit to the Liferay NPM Bundler.
A little and simple demo application is presented afterwards. It shows some basic use of Web Components and how to lazy load applications using javascript on demand.
Motivation
Let me start with a little preamble to explain our approach to developing applications.
The days when websites were static entities and all the magic happened on the server are long gone. Nowadays Javascript is an integral and huge part of the web. Customers expect and demand more than ever before, the web applications of today have to be as powerful as desktop applications, with excellent usability on phone and desktop. Click and wait is a no-go.
Developers and web designers are challenged to meet and exceed expectations. We need to develop faster, more beautiful applications, with better usability, WCAG compliant, secure and of course: With smaller budgets.
For those reasons, my team and I decided several years ago that we needed to change how we worked. While we cannot fulfill all these requirements all the time (especially the budget one), we still saw room for improvement.
Rest backend and Javascript frontend
We decided to modularize the backend and to use mainly rest service endpoints. Kinda the microservice approach, but with Liferay as a platform. We fiddled a bit with that in Liferay 6.2 and had a hard time with it, with 7.0 it became easy and with 7.2 implementing the Whiteboard specification it became a blast.
With that came the need to use modern frontend libraries/frameworks in the browser. To do that you need to have a build and development process that supports that. We decided to use Webpack as our main build tool and module bundler. We use it to transpile our ECMAScript code to support Internet Explorer 11. The scss files are compiled, autoprefixed and minified.
After some consideration we decided to use Vue.js for our components, but we are not religious about it.
We have mostly stopped writing portlets altogether, and when we do write one, it is usually just a shell adding some html to a page. The most useful thing about portlets is the configuration page.
We found that Web content and Freemarker templates are very convenient. Web content structures allow us to add some content, like headline and description and even some configuration. The templates write the necessary html tags to the page, the javascript application renders itself into them and often uses rest calls to fetch further data. With that, any Webcontent can become an application.
With 7.2 we plan to replace that with the new Fragments (awesome new feature!), but we are not there yet.
Webpack vs. the Liferay NPM Bundler
While this blog post covers the Webpack approach, there is an alternative. Liferay has created their own npm bundler and improved it last year. It does things a bit differently, but it is a worthwhile alternative and quite impressive.
I’d like to outline why we still prefer to bundle our stuff using Webpack.
Webpack is currently the most successful and popular frontend module bundler and for good reason. It has lots of users, a ton of documentation and also lots of contributors who improve it all the time. It is also used in other departments in our company. If you are stuck, chances are high that you find a solution.
The development proxy is a real gem that allows hot reloading of changes. Instead of writing code and deploying it, you just edit the code/scss and when you save, the page changes instantly. For those who have never seen this, here is a small animation showing the effect.
I just save and the content in the browser changes an instant later. Whenever a file in the source folder is changed, the relevant file is transpiled and the change is transmitted to the browser. This really speeds up development. If necessary we can even proxy the productive system and debug issues there. This can be quite useful when an issue can only be seen in production.
Using the development proxy also has the advantage that the frontend developer only needs to have limited knowledge of Liferay. He starts Liferay, then starts the dev server and starts working on the frontend. He doesn’t need to know about portlet actions and all those pesky details a portlet brings with it.
The frontend code is actually quite backend agnostic. It depends on the theme styling and usually a few other things, but in general, it would also work with a few changes on any html page.
But there is also one disadvantage of our approach.
Webpack works best when it processes all the javascript code of a webpage. Then it can optimize the result and discard everything that is not needed/used (a process called tree shaking).
This basically leads to one limitation: All frontend code needs to be compiled in a single Webpack build. While it would be possible to work around this limitation to some degree, it is really how Webpack is supposed to be used. It’s part of the concept.
The simplest solution we found was to put (nearly) all our javascript and css into a single frontend loader project. We pondered some other schemes but adding a subfolder per application works quite well so far.
A change to the application usually means to deploy the relevant rest service(s) and to deploy the whole frontend module. The rest services are individual modules and can be deployed extra, but the frontend is one package. So far, this wasn’t a big deal for us. But maybe you feel different.
The Liferay npm bundler takes a different approach and does better here.
It allows you to put the javascript code into distinct modules. The modules are more independent than in our approach. Obviously, since the modules are all independent, it can’t optimize that well. But that’s a minor thing.
The main downside here is that it works only in Liferay, it is not widely used, so there isn’t a ton of information out there. In general, Webpack has more features, most notably hot reloading as shown above.
We like Webpack, but maybe you like the Liferay approach better.
An example application
Now, let’s dive into it.
I have prepared a little example project on github. I have tested it with Liferay 7.2 and Firefox/Chrome. It should work with all browsers and Internet Explorer 11, but since I didn’t test it, I can’t guarantee it.
The example assumes that you have three pages each containing a simple application:
Hello World
Today
Shuffled Words
The idea here is to show how multiple applications could be put into a single build and dynamically loaded on demand. Only code that is used/needed, should be loaded. Some code needs to be loaded at once, but it is pretty small. If we didn’t need to support Internet Explorer, the generated javascript code would be even smaller (The difference in the example is about 120 KB).
To test the example, you can do it online here. Please note that this page is based on Liferay 7.0. As you can see, the javascript code is pretty independent of the Liferay version.
If you don’t want to build it yourself and just try it, you can download the module here. Please note that it was built for Liferay 7.2 and won’t deploy in other versions.
If you want to build it yourself, please clone the repository, build the frontend-loader module and deploy it in Liferay 7.2. Please read the build instructions in the Readme. To make it work in other 7.x versions, you need to change the dependency version in the build.gradle file.
Whether you deploy the jar file or build it yourself, you need to use the following steps to test the code.
Create three pages: Hello World, Today and Words.
Create three webcontents with the following content
Put them on the respective pages.
Attention: You need to switch to the source code view! Otherwise it will be added as text and not as an html tag.
Hello World
Today
Shuffled Words
Just an html tag. That’s all. Ok, shuffle expects a list of words as “configuration”, but that’s actually to show how you would add configuration to such a tag.
Now you should see “Hello World!” on the first page, what day it is on the second and “Liferay is simply great!” shuffled on the third.
If you open the developer tools in the browser and monitor the network traffic on the pages, you will notice that lodash and date-fns are not loaded on the “Hello World” page. They are only loaded on the pages that need them. Please note that some .js file is loaded too. This is the actual component with its dependencies. Webpack automatically names the dynamic imports and chooses a number for them.
Anyway: The really important thing is that everything is loaded on demand. This makes pages really fast.
app.js
App.js is the entry point of the application. In it we define one Webcomponent per application. When a fitting html tag (e.g. <app-today>) “appears” on the page (is added to the DOM), the connectedCallback method is executed. In that method we simply call the import function of Webpack and it loads our application and all of its dependencies.
Today depends on the date-fns library, Shuffled Words depends on the lodash library. Those libraries are only loaded when the page contains the respective html tags.
// We create a Webcomponent called today
class Today extends HTMLElement {
  connectedCallback() {
    // Lazy load the actual implementation
    import('./components/Today/today.js').then(module => {
      // Call the default method of our application
      module.default(this);
    }).catch(error => console.error('An error occurred while loading the component `Today`', error));
  }
}

// Now we link our component to the html tag
register('app-today', Today);
Today.js
// We import date-fns to showcase that date-fns is loaded dynamically when we load the component.
import { format } from 'date-fns';

// We export the initialization function as default.
// It expects an html element as parameter and simply inserts some text into it.
export default function showToday(element) {
  const today = new Date();
  element.innerHTML = 'Today is a ' + format(today, 'dddd');
}
The package.json
The purpose of this file is similar to the build.gradle files. It contains a list of dependencies and several build targets. When you build the module for deployment, the “prod” task is executed. It optimizes and minifies the output for production environment.
Since the minified code is quite ghastly to read, there is also the option to do a development build, and to start the dev server by running the “hot” task. Using hot, a watchdog process is started that recompiles the relevant code on every save. A dev server is started on port 3000 and it proxies all requests, except for our application, to port 8080 where Liferay runs.
If you have built, deployed and configured it yourself, you can try it out now yourself. Open a command line window, switch to the module folder “dccs-loader-demo” and type “npm run hot”.
If it doesn’t open a window by itself, go to “http://localhost:3000” in the browser. You should see your Liferay page since the dev server proxies it. It also intercepts requests for our js/css files and replaces them with development versions.
Now, open “hello.js” in any editor, change the text and save. It should behave as in the video above. With this method, developing a javascript application becomes really fun. Compare it to “blade gw deploy” and you will certainly notice the difference ...
The Webpack config
While it might feel a bit overwhelming if you have never used it before, it isn’t too complicated. Webpack expects a json with lots of configuration options as a source. If you don’t like the defaults, you need to overwrite them. There is a ton of resources on the Internet describing how such a config works.
I’d like to mention just a few specialities. First of all, with Webpack 4 the CommonsChunkPlugin was replaced by the SplitChunks plugin. It’s more powerful. Uglify is gone too, TerserPlugin is the new kid on the block. If you search for resources on the internet, you will find configs for Webpack 1,2,3. Be careful, some of the configurations might not work anymore or work differently in Webpack 4.
I have added quite a few more comments than we normally add; I hope they help you understand the settings and the intention behind them.
|
|
Posted
over 6 years
ago
by
Christoph Rabel
This post assumes that you are at least familiar with Webpack. If not, you might find this post helpful. It explains first why we stopped developing portlets and use a javascript frontend + rest services approach. It doesn’t cover the rest backend at
... [More]
all. Then it explains why we use Webpack as a task runner and compare it a bit to the Liferay NPM Bundler.
A little and simple demo application is presented afterwards. It shows some basic use of Web Components and how to lazy load applications using javascript on demand.
Motivation
Let me start with a few little preamble to explain our approach to developing applications.
The days when websites were static entities and all the magic happened on the server are long gone. Nowadays Javascript is an integral and huge part of the web. Customers expect and demand more than ever before, the web applications of today have to be as powerful as desktop applications, with excellent usability on phone and desktop. Click and wait is a no-go.
Developers and web designers are challenged to meet and exceed expectations. We need to develop faster, more beautiful applications, with better usability, WCAG compliant, secure and of course: With smaller budgets.
For those reasons, my team and I decided several years ago that we needed to change how we worked. While we cannot fulfill all these requirements all the time (especially the budget one), we still saw room for improvement.
Rest backend and Javascript frontend
We decided to modularize the backend and to use mainly rest service endpoints. Kinda the microservice approach, but with Liferay as a platform. We fiddled a bit with that in Liferay 6.2 and had a hard time with it, with 7.0 it became easy and with 7.2 implementing the Whiteboard specification it became a blast.
With that came the need to use modern frontend libraries/frameworks in the browser. To do that you need to have a build and development process that supports that. We decided to use Webpack as our main build tool and module bundler. We use it to transpile our ECMAScript code to support Internet Explorer 11. The scss files are compiled, autoprefixed and minified.
After some consideration we decided to use Vue.js for our components, but we are not religious about it.
We mostly have stopped writing portlets at all and when we do, they are usually just shells adding some html to a page. The most useful thing about portlets is the configuration page.
We found that Web content and Freemarker templates are very convenient. Web content structures allow us to add some content, like headline and description and even some configuration. The templates write the necessary html tags to the page, the javascript application renders itself into them and often uses rest calls to fetch further data. With that, any Webcontent can become an application.
With 7.2 we plan to replace that with the new Fragments (awesome new feature!), but we are not there yet.
Webpack vs. the Liferay NPM Bundler
While this blog post covers the Webpack approach, there is an alternative. Liferay has created their own npm bundler and improved it last year. It does things a bit differently, but it is a worthwhile alternative and quite impressive.
I’d like to outline why we still prefer to bundle our stuff using Webpack.
Webpack is currently the most successful and popular frontend module bundler and for good reason. It has lots of users, a ton of documentation and also lots of contributors who improve it all the time. It is also used in other departments in our company. If you are stuck, chances are high that you find a solution.
The development proxy is a real gem that allows hot reloading of changes. Instead of writing code and deploying it, you just edit the code/scss and when you save, the page changes instantly. For those who have never seen this, here is a small animation showing the effect.
Link to video
(Should be embedded)
I just save and the content in the browser changes an instant later. Whenever a file in the source folder is changed, the relevant file is transpiled and the change is transmitted to the browser. This really speeds up development. If necessary we can even proxy the productive system and debug issues there. This can be quite useful when an issue can only be seen in production.
Using the development proxy also has the advantage that the frontend developer only needs to have limited knowledge of Liferay. He starts Liferay, then starts the dev server and starts working on the frontend. He doesn’t need to know about portlet actions and all those pesky details a portlet brings with it.
The frontend code is actually quite backend agnostic.It depends on the theme styling and usually a few other things, but in general, it would also work with a few changes on any html page.
But there is also one disadvantage of our approach.
Webpack works best when it processes all the javascript code of a webpage. Then it can optimize the result and discard everything that is not needed/used (a process called tree shaking).
This basically leads to one limitation: All frontend code needs to be compiled in a single Webpack build. While it would be possible to work around this limitation to some degree, it is really how Webpack is supposed to be used. It’s part of the concept.
The simplest solution we found was to put (nearly) all our javascript and css into a single frontend loader project. We pondered some other schemes but adding a subfolder per application works quite well so far.
A change to the application usually means to deploy the relevant rest service(s) and to deploy the whole frontend module. The rest services are individual modules and can be deployed extra, but the frontend is one package. So far, this wasn’t a big deal for us. But maybe you feel different.
The Liferay npm bundler takes a different approach and does better here.
It allows you to put the JavaScript code into distinct modules. The modules are more independent than in our approach. Obviously, since the modules are all independent, it can't optimize as well. But that's a minor thing.
The main downside is that it works only in Liferay and is not widely used, so there isn't a ton of information out there. In general, Webpack has more features, most notably hot reloading as shown above.
We like Webpack, but maybe you like the Liferay approach better.
An example application
Now, let’s dive into it.
I have prepared a little example project on GitHub. I have tested it with Liferay 7.2 and Firefox/Chrome. It should work with all browsers including Internet Explorer 11, but since I didn't test them all, I can't guarantee it.
The example assumes that you have three pages each containing a simple application:
Hello World
Today
Shuffled Words
The idea here is to show how multiple applications can be put into a single build and dynamically loaded on demand. Only code that is actually used should be loaded. Some code needs to be loaded up front, but it is pretty small. If we didn't need to support Internet Explorer, the generated JavaScript code would be even smaller (the difference in the example is about 120 KB).
You can try the example online here. Please note that this page is based on Liferay 7.0. As you can see, the JavaScript code is pretty independent of the Liferay version.
If you don’t want to build it yourself and just try it, you can download the module here. Please note that it was built for Liferay 7.2 and won’t deploy in other versions.
If you want to build it yourself, please clone the repository, build the frontend-loader module and deploy it in Liferay 7.2. Please read the build instructions in the Readme. To make it work in other 7.x versions, you need to change the dependency version in the build.gradle file.
Whether you deploy the jar file or build it yourself, you need to use the following steps to test the code.
Create three pages: Hello World, Today and Words.
Create three web contents with the following content.
Put them on the respective pages.
Attention: you need to switch to the source code view! Otherwise the markup will be added as text and not as an HTML tag.
Hello World
Today
Shuffled Words
Each is just an HTML tag. That's all. OK, shuffle expects a list of words as "configuration", but that's really there to show how you would add configuration to such a tag.
Now you should see “Hello World!” on the first page, what day it is on the second and “Liferay is simply great!” shuffled on the third.
If you open the developer tools in the browser and monitor the network traffic on the pages, you will notice that lodash and date-fns are not loaded on the "Hello World" page. They are only loaded on the pages that need them. Please note that an additional .js file is loaded too. This is the actual component with its dependencies. Webpack automatically names the dynamic imports and chooses a number for them.
Anyway: The really important thing is that everything is loaded on demand. This makes pages really fast.
app.js
App.js is the entry point of the application. In it we define one web component per application. When a fitting HTML tag (e.g. app-today) "appears" on the page (is added to the DOM), the connectedCallback method is executed. In that method we simply call the import function of Webpack and it loads our application and all of its dependencies.
Today depends on the date-fns library, Shuffled Words depends on the lodash library. Those libraries are only loaded when the page contains the respective html tags.
// We create a Webcomponent called today
class Today extends HTMLElement {
  connectedCallback() {
    // Lazy load the actual implementation
    import('./components/Today/today.js').then(module => {
      // Call the default method of our application
      module.default(this);
    }).catch(error => console.error('An error occurred while loading the component `Today`', error));
  }
}

// Now we link our component to the html tag
register('app-today', Today);
Today.js
// We import date-fns to showcase that date-fns is loaded dynamically when we load the component.
import { format } from 'date-fns';

// We export the initialization function as default.
// It expects a html element as parameter and simply inserts some text into it.
export default function showToday(element) {
  const today = new Date();
  element.innerHTML = 'Today is a ' + format(today, 'dddd');
}
The package.json
The purpose of this file is similar to the build.gradle files. It contains a list of dependencies and several build targets. When you build the module for deployment, the "prod" task is executed. It optimizes and minifies the output for the production environment.
Since the minified code is quite ghastly to read, there is also the option to do a development build, and to start the dev server by running the "hot" task. Using hot, a watchdog process is started that recompiles the relevant code on every save. A dev server is started on port 3000 and proxies all requests, except those for our application, to port 8080 where Liferay runs.
If you have built, deployed and configured it yourself, you can try it out now. Open a command line window, switch to the module folder "dccs-loader-demo" and type "npm run hot".
If it doesn’t open a window by itself, go to “http://localhost:3000” in the browser. You should see your Liferay page since the dev server proxies it. It also intercepts requests for our js/css files and replaces them with development versions.
Now, open "hello.js" in any editor, change the text and save. It should behave as in the video above. With this method, developing a JavaScript application becomes really fun. Compare it to "blade gw deploy" and you will certainly notice the difference ...
The Webpack config
While it might feel a bit overwhelming if you have never used it before, it isn't too complicated. Webpack expects a configuration object with lots of options as its input. If you don't like the defaults, you need to override them. There are a ton of resources on the Internet describing how such a config works.
I'd like to mention just a few specialities. First of all, with Webpack 4 the CommonsChunkPlugin was replaced by the SplitChunks plugin. It's more powerful. Uglify is gone too; TerserPlugin is the new kid on the block. If you search for resources on the internet, you will find configs for Webpack 1, 2 and 3. Be careful, some of those configurations might not work anymore or work differently in Webpack 4.
I have added quite a few more comments than we normally add; I hope they help you understand the settings and the intention behind them.
Posted over 6 years ago by David H Nebinger
If you’ve spent time rummaging around Liferay’s search and indexing documentation provided here, you’ll find a lot of details about document contributors, index writers, search registrars, etc.
The part that might be missing is what all of these things actually do, why they are important, and why you actually want to go down the road of supporting indexing and search for your custom entities.
In this blog entry, I'm going to break everything down and clarify why things are in the Liferay documentation samples; hopefully, by the end of the post, you'll have the knowledge you need to get your index and search needs done right the first time.
But first let's understand why Liferay is even using an external search index in the first place.
What Search Solves
The reason Liferay maintains a separate search index from the data store is that some things are either really hard or simply not practical in a standard relational database like Oracle or MySQL.
The search index is used to match documents on keywords or phrases regardless of the “column” the data might be from.
Imagine a table in a database with five large text columns: maybe a product name, a description, installation instructions, recycling options and the sales brochure content.
If you wanted to search for a phrase such as “keyless entry” but you wanted to match on any of the 5 columns, you end up with something like:
SELECT * FROM mytable WHERE
  (prod_name LIKE '%keyless entry%') OR
  (description LIKE '%keyless entry%') OR
  (install_instr LIKE '%keyless entry%') OR
  (recycling LIKE '%keyless entry%') OR
  (brochure LIKE '%keyless entry%')
This is already pretty ugly, but now what happens if you want to search for the keywords “keyless” or “entry”? Your query starts to become more unmaintainable as you add relatively simple additional criteria into the mix.
And if you have multiple tables in your database and you want to join results of matches amongst more than one table? Your query misery has just been increased astronomically!
Certain types of queries simply become too unwieldy or inefficient, and in some cases impossible, in a regular SQL-92 database.
Search Index to the Rescue
The search index solves this problem because search occurs over a Document, not over a table column.
Yes, I know that you can control the Fields included or excluded from the search; we're just focusing on theory at this point...
In search index parlance, each record from our table(s) will become a Document in the search index. The Document can have multiple Fields which may come straight from the table columns or they might be manufactured values (turning numerical codes into their string labels) or they may contain values from Parent/Child table relations to include necessary child data into the Document.
When a search for a phrase or for keywords is performed, the search index will search for Documents that match, regardless of the Fields the matches might come from. This way, as new keywords or Fields or Documents are added, the complexity of the query remains unchanged.
For the multiple table scenario, the records from the different tables are included into the same search index. Common Fields like NAME and DESCRIPTION would be reused across the different Document types so searching for “keyless entry” in a NAME Field would yield results from all tables that had a corresponding match.
Fields that are unique to one table can still be added for indexing, but the search query may need to be modified to include those additional Fields.
The Developer Perspective
To get back to the developer perspective, your goal in all of this is to get your entities into the search index such that when a user does a search in Liferay, your entities can be found and matched upon in the same way that a Liferay entity would be.
This is where the Liferay documentation will start to apply…
When Liferay is documenting how to contribute model entity fields into the index, they are describing what is necessary to get the fields from your entity into the index so they can be matched during a search.
When Liferay is documenting how to configure re-indexing and batch indexing behavior, they’re showing what you will be doing to ensure that your custom entities are also re-indexed when the Liferay Admin wants to reindex everything.
When Liferay is documenting how to add your model entity’s terms to the query, they’re showing how to add any additional Fields you might have defined for your custom entity to the search query so those custom Fields can be checked.
When Liferay is documenting how to pre-filter search results, they’re providing you a way to exclude matches from the results to prevent records from getting through that you don’t want included.
When Liferay is documenting how to create a results summary, they’re providing you a way to control the generated summary for your entity that the user will see in the search results.
And finally, when Liferay is documenting how to register your search services, they’re showing how all of these pieces you’ve generated will be made available to the search and indexing infrastructure to ensure they all get picked up.
Indexing and Search Customizations
Next we'll get into each of the extension points and go into the details you'll need to build your own customizations.
Contribute Model Entity Fields into the Index
Liferay Documentation: https://portal.liferay.dev/docs/7-2/frameworks/-/knowledge_base/f/indexing-model-entities#contributing-model-entity-fields-to-the-index
When you have a custom Service Builder entity (or really any entity you want to search), one of the most important things you need to do is actually contribute Fields into the Document for your entity.
A ModelDocumentContributor is the class that will help get your entity’s columns mapped into Fields in the Document to be indexed.
Remember that the columns from your entity are not just going to be stored in the index on their own.
Every entity that requires indexing will need an instance of this class. If your entity is not directly indexed, you won’t need this class because you’re just not going to index it.
When adding Fields to the Document for your entity, keep these things in mind:
You don’t need to have all of your entity’s fields in the Document as Fields; you only need the ones that a keyword search should match on. The Document will always have your primary key value, so when your entity is a search result (AKA a Hit), you can always retrieve your entity.
Try to use constant values from the com.liferay.portal.kernel.search.Field class for your Field names, where they make sense. If your entity has a name, use Field.NAME. If your entity has a description, use Field.DESCRIPTION. Using the constants will reuse Fields in the Document that Liferay will already know how to include in a search query so your customization effort is reduced.
Don’t use the constant values for something they’re not. If your entity has an array of chemical names, for example, don’t concatenate them together and store as the Field.CAPTION type because they simply aren’t a caption. It is okay to come up with your own Field names.
Understand addText() vs addKeyword(). Both of these methods are overloaded and allow for text or keyword addition of many different types, but the search index will handle them very differently.
There are numerous other add methods for different data types such as addDate(), addNumber(), addGeolocation(), etc. Don’t coerce all of your data into Strings as this can throw off your search results (you wouldn’t want a search for “19” returning every record for the years 19xx and 2019, for example).
Fields do not have to be exact copies of the entity data; they are often better if they represent normalized data instead of the entity values. For example, you might have a clientId in your entity to point off to a different client record; for indexing, you will have better results if your Field is for the actual client name (from the client record) instead of (or in addition to) storing the clientId from the entity.
Fields can be created for data not part of the entity, so an entity with 5 fields could be represented in a Document with 20 Fields if it makes sense to have them as matching targets.
When adding support for filtering and/or sorting, the fields to filter or sort on must be added as Fields in the Document; you can’t sort on a field from the entity, for example, because the search includes only Hits from the index, not from an additional search of the database.
When handling localizable text, index the text in all languages. The Liferay documentation shows how to handle adding localized Fields by using specially crafted Field names. When a search is performed for a specific locale, the matching Fields can be used so the correct results will be returned and exclude false positives that could arise from an indirect match from another language.
Be sure to include all Fields that will later be used in a ModelSummaryContributor implementation (below). When building the Summary, you don’t want to have to fetch the Entity directly to get additional info to include in the Summary.
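To make this concrete, here is a minimal sketch of a contributor for the hypothetical FooEntry used elsewhere in this post; the class, field names and getters are illustrative assumptions, not code from a real project:
@Component(
    property = "indexer.class.name=com.example.foo.model.FooEntry",
    service = ModelDocumentContributor.class
)
public class FooEntryModelDocumentContributor
    implements ModelDocumentContributor<FooEntry> {

    @Override
    public void contribute(Document document, FooEntry fooEntry) {
        // Reuse Liferay's standard Field names where they actually fit...
        document.addText(Field.NAME, fooEntry.getName());
        document.addText(Field.DESCRIPTION, fooEntry.getDescription());

        // ...and use your own names for entity-specific data.
        document.addKeyword("fooStatus", String.valueOf(fooEntry.getStatus()));

        // Use the typed add methods rather than coercing everything to Strings.
        document.addDate(Field.MODIFIED_DATE, fooEntry.getModifiedDate());
    }
}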
Text vs Keyword
It can be confusing when faced with the choice between adding a Field as text and adding it as a keyword. There is one significant difference that separates these two concepts: whether the full text is stored or only keywords are stored.
When storing as text, a phrase such as “The quick brown fox jumps over the lazy dog.” will be stored as-is in the Field. This is useful when you expect to face searches over phrases like “quick brown fox” or “lazy dog”. Because the full text is intact, only those records that include the matching phrase will be Hits.
With keyword storage, common words and duplicated words are removed from the text.
Common words, known in indexing as stop words, are words that appear frequently in a language but provide no value from an index perspective. This includes words like "the, this, a, that, those, he, him, her," etc. From the phrase above, removing the stop words would index "quick brown fox jumps over lazy dog".
Additionally, duplicate words are removed, but the occurrence count is retained. In this blog, for example, I must have used the word Field at least 50 times so far. In keyword storage, Field would be included with a count of 50, but all sense of sentence structure or placement within the text is lost.
Of course, the actual storage of the keyword-based fields is up to the search appliance, whether it is Solr or Elasticsearch. While they may handle things in a different way than what is described here, from a coding perspective it is easier just to imagine this is how they do it.
Storing as keywords helps to reduce storage size and is good for keyword matching, but it is not useful for phrase searches such as “keyless entry” since the phrase is not retained in this storage method.
Since the occurrence count is retained, a search for "keyless" would be able to rank Hits with a higher occurrence count above other Documents that used the word only sporadically. Keyword storage also tends to be faster than raw text storage for keyword searches.
You might choose to store your text as two Fields, one using text and the other using keyword. The first would favor phrase searches and the other keyword searches.
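If you do go the dual-Field route, the contributor simply adds the same value twice under different names; the field names below are made up for illustration:
// Keep the full phrase for phrase searches like "keyless entry"...
document.addText("productDescription", description);

// ...and a keyword-analyzed copy for keyword matching and ranking.
document.addKeyword("productDescriptionKeyword", description);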
Configure Reindexing and Batch Indexing Behavior
Liferay Documentation: https://portal.liferay.dev/docs/7-2/frameworks/-/knowledge_base/f/indexing-model-entities#configure-re-indexing-and-batch-indexing-behavior
When an administrator uses the Reindex option in the control panel, a batch reindexing process is kicked off. For a given entity, this typically means each record in the table will be retrieved, the Document is populated using the registered ModelDocumentContributor and the Document is sent to the indexing service (typically Elasticsearch) for storage.
Since this is the normal flow, you might be asking why you would need to customize this process.
Many times you will actually want every record to be indexed, but it is also common to have records that should automatically be excluded from indexing.
For example, JournalArticles are obviously indexed, but only articles that are in the workflow status of APPROVED or IN_TRASH. Any that are PENDING, DENIED, etc are excluded from indexing. Depending upon your own perspective, you might think that indexing IN_TRASH entities doesn’t make sense, so you might want to exclude them. For some users, you might want to include PENDING in the index so they might see pending articles in their search results to see just how they would rank for typical searches once approved.
These decisions are not going to be the same for all environments or all developers. Your own needs and requirements will determine which documents should be indexed and which ones should be excluded.
Rather than trying to avoid creating a Document using the ModelDocumentContributor to prevent indexing of articles in these states, a ModelIndexerWriterContributor is created to exclude these records from being processed in the first place.
This ends up being a much better process as it saves network and database bandwidth (by not retrieving records that won’t be indexed) and processing time (time wasted trying to create a document for a record that shouldn’t be indexed).
Every entity that is going to be indexed needs an instance of this class.
At the very least, most of the code from the Liferay documentation can be used as-is. The only change is to the customize() method; the minimal implementation is going to be:
@Override
public void customize(
    BatchIndexingActionable batchIndexingActionable,
    ModelIndexerWriterDocumentHelper modelIndexerWriterDocumentHelper) {

    batchIndexingActionable.setPerformActionMethod(
        (FooEntry fooEntry) -> {
            Document document =
                modelIndexerWriterDocumentHelper.getDocument(fooEntry);

            batchIndexingActionable.addDocuments(document);
        });
}
This version does not filter any records from the table and would reindex every row.
To learn how to exclude entities/rows from being indexed, the Liferay documentation provides sample code demonstrating how to write the customize() method, but here are some additional details that will help you decide how to implement yours:
The BatchIndexingActionable is a wrapper around a DynamicQuery. Anything you can do in a DynamicQuery, you can add to your BatchIndexingActionable instance.
The goal should be to exclude records you know should not be indexed. This might be determined by workflow status or even your own status codes. You might want to exclude older records to prevent search Hits on them without actually deleting them from the system. A sketch of such a criterion follows this list.
The content for the batchIndexingActionable.setPerformActionMethod() in the example code is what you’ll use 99% of the time (modifying for your own entity class).
The getIndexerWriterMode() method is normally going to return IndexerWriterMode.UPDATE. The other options are used to “clean up” a record that might have been left behind previously but might need to be removed.
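Here is a sketch of what the exclusion criterion mentioned above might look like, assuming the entity has a workflow status column named status; the setAddCriteriaMethod() call mirrors how Liferay's own contributors filter JournalArticles, so verify it against your Liferay version:
@Override
public void customize(
    BatchIndexingActionable batchIndexingActionable,
    ModelIndexerWriterDocumentHelper modelIndexerWriterDocumentHelper) {

    // Narrow the underlying DynamicQuery so excluded rows are never even fetched.
    batchIndexingActionable.setAddCriteriaMethod(
        dynamicQuery -> {
            Property statusProperty = PropertyFactoryUtil.forName("status");

            dynamicQuery.add(
                statusProperty.eq(WorkflowConstants.STATUS_APPROVED));
        });

    batchIndexingActionable.setPerformActionMethod(
        (FooEntry fooEntry) -> batchIndexingActionable.addDocuments(
            modelIndexerWriterDocumentHelper.getDocument(fooEntry)));
}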
Adding Your Model Entity’s Terms to the Query
Liferay Documentation: https://portal.liferay.dev/docs/7-2/frameworks/-/knowledge_base/f/searching-the-index-for-model-entities#adding-your-model-entitys-terms-to-the-query
This is a sister class to your ModelDocumentContributor class. In this class you’re going to be adding Fields that you defined in your Document instance to the search context query helper to facilitate keyword searches on the fields.
Not all entities will need the KeywordQueryContributor implementation; only those that need to add Fields to an in-flight search query that were added by the entity’s ModelDocumentContributor.
This is kind of an important aspect - there is already a search being started and your class needs to add Fields for the keyword search.
So you may not want to add every Field that you did in the ModelDocumentContributor, but you absolutely want to add those that should be included in the keyword search.
In Liferay’s example code, the FooEntryModelDocumentContributor added two date Fields, a simple text subtitle Field and localized Fields for the content and the title.
In the corresponding FooEntryKeywordQueryContributor, only the subtitle, title and content Fields were added to the query; the two date Fields were not because they are not really subject to a Keyword search.
Likewise you may have other Fields that you add in your own ModelDocumentContributor that you may or may not want to include in the KeywordQueryContributor; just note that those you include will be searched, while those you exclude will not be searched.
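As a sketch, a contributor for the hypothetical FooEntry might look like the following; the QueryHelper-based wiring follows the Liferay documentation sample, but treat the exact names as assumptions to verify:
@Component(
    property = "indexer.class.name=com.example.foo.model.FooEntry",
    service = KeywordQueryContributor.class
)
public class FooEntryKeywordQueryContributor
    implements KeywordQueryContributor {

    @Override
    public void contribute(
        String keywords, BooleanQuery booleanQuery,
        KeywordQueryContributorHelper keywordQueryContributorHelper) {

        SearchContext searchContext =
            keywordQueryContributorHelper.getSearchContext();

        // Only the Fields added here participate in the keyword search.
        _queryHelper.addSearchTerm(
            booleanQuery, searchContext, "subtitle", false);
        _queryHelper.addSearchLocalizedTerm(
            booleanQuery, searchContext, Field.TITLE, false);
        _queryHelper.addSearchLocalizedTerm(
            booleanQuery, searchContext, Field.CONTENT, false);
    }

    @Reference
    private QueryHelper _queryHelper;
}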
Pre-Filtering
Liferay Documentation: https://portal.liferay.dev/docs/7-2/frameworks/-/knowledge_base/f/searching-the-index-for-model-entities#pre-filtering
Pre-filtering is a method to exclude Hits from the returned search results. You may not have a need to do this kind of thing, but the option is there.
Remember the earlier example where it was suggested that PENDING JournalArticles might be indexed so content approvers could see the pending articles in the search results? In that situation, you would not want everyone to see PENDING articles.
Through a custom ModelPreFilterContributor implementation, you could add a role-specific filter to exclude PENDING articles from normal users and only include them for content approvers.
Not all entities will need an implementation of ModelPreFilterContributor - only in cases where some instances of your entity should not be included as Hits under specific circumstances will this be necessary.
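A status-based pre-filter for the hypothetical FooEntry could look roughly like this sketch; the role check is only hinted at in a comment, and in real code you would consult the permission checker or delegate to Liferay's workflow status pre-filter contributor:
@Component(
    property = "indexer.class.name=com.example.foo.model.FooEntry",
    service = ModelPreFilterContributor.class
)
public class FooEntryModelPreFilterContributor
    implements ModelPreFilterContributor {

    @Override
    public void contribute(
        BooleanFilter booleanFilter, ModelSearchSettings modelSearchSettings,
        SearchContext searchContext) {

        // For ordinary users, only APPROVED entries should come back as Hits.
        // A real implementation would first check whether the current user
        // is a content approver before applying this filter.
        booleanFilter.addRequiredTerm(
            Field.STATUS, String.valueOf(WorkflowConstants.STATUS_APPROVED));
    }
}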
Creating a Results Summary
Liferay Documentation: https://portal.liferay.dev/docs/7-2/frameworks/-/knowledge_base/f/returning-results#creating-a-results-summary
Every Hit (search result) will be displayed in the search results portlet in Liferay. You have control over the summary content that is displayed in the search result using your ModelSummaryContributor.
The Liferay sample demonstrates setting a summary for the matched entity. Remember that both the Content and Title Fields are localized; the implementation provided exposes the Field naming used for localized fields, but in the end the localized title and content are extracted from the Document and used to create the Summary instance.
If you’re not using localized fields, your Summary creation will be simpler than the provided sample.
Although it is not highlighted in the Liferay example, you should try to use the Document Fields when creating the Summary instance. If you have to do a DB query to fetch your entity for something to complete the Summary, you will be facing a performance hit. It is recommended that all values you need or want in the Summary should be added as Fields in the Document to avoid the DB query.
Every entity which can be returned as a Hit (search result) should implement a ModelSummaryContributor class.
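For non-localized fields, the contributor can be as small as this sketch for the hypothetical FooEntry; it builds the Summary purely from the Document, so no extra database query is needed:
@Component(
    property = "indexer.class.name=com.example.foo.model.FooEntry",
    service = ModelSummaryContributor.class
)
public class FooEntryModelSummaryContributor
    implements ModelSummaryContributor {

    @Override
    public Summary getSummary(Document document, Locale locale, String snippet) {
        // Pull everything we need straight out of the indexed Document.
        Summary summary = new Summary(
            locale, document.get(Field.NAME), document.get(Field.DESCRIPTION));

        summary.setMaxContentLength(200);

        return summary;
    }
}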
Controlling the Visibility of Model Entities
Liferay Documentation: https://portal.liferay.dev/docs/7-2/frameworks/-/knowledge_base/f/returning-results#controlling-the-visibility-of-model-entities
This will likely be a rarely used extension point. In cases where one Entity can have Related Assets, the ModelVisibilityContributor determines whether the entity can be selected as a Related Asset or not.
For example, Web Contents can have a DlDocument as a Related Asset; when creating a Web Content, the user can do a search to find documents that can be added as a Related Asset.
The ModelVisibilityContributor can be used to prevent your entity/entities from being available for selection.
In the sample Liferay implementation in the documentation, it masks FooEntry instances that are not in the right workflow status.
Not all entities will require an instance of the ModelVisibilityContributor; only those that can be related to another asset and want some control over whether an instance is available or not will implement one of these classes.
Search Service Registration
Liferay Documentation: https://portal.liferay.dev/docs/7-2/frameworks/-/knowledge_base/f/search-service-registration#search-service-registration
The last piece of the custom index/search implementation is your search service registrar.
Although all of the classes previously discussed are implementations of Liferay interfaces to support indexing and search, and although they are all registered as OSGi components, that alone will not register them into the Liferay Search Registry.
It is through the Search Registry that Liferay finds all of the necessary pieces when dealing with indexing or searching, so our last piece to implement is the SearchRegistrar to finish wiring everything up.
The Liferay implementation uses a regular @Component with an @Activate method to trigger the search registration process. This will get invoked as soon as the @Referenced services are wired in.
You’ll want to add @Reference dependencies for all of the indexing and search classes you created as part of this document.
In the sample code, the registration sets the following in the modelSearchDefinition:
The default selected Field names are the default list of Fields that are selected; the list shown is mostly the standard set, but it includes MODIFIED_DATE (the FooEntryModelDocumentContributor sets this Field value).
The default selected localized Field names; since the FooEntry has localized TITLE and CONTENT, these are added as default selected fields (you may or may not have any of these).
The ModelIndexWriteContributor, the one for the FooEntry.
The ModelSummaryContributor, the one for the FooEntry.
The ModelVisibilityContributor, the one for the FooEntry (you may or may not have one of these).
Additionally there is an @Deactivate method that unregisters everything from the Search Registry when the module is unloaded.
Every entity that is being indexed/searched will register its classes in this same way.
It is recommended that each entity has its own SearchRegistrar implementation, but this is not a requirement. While you could have a single SearchRegistrar that took care of registering all of the classes for all of the entities, there would be too high a chance of a single missing @Referenced component blocking the registration of all of the entities' search classes. For this reason, it is recommended that each entity have a separate registrar so a missing component only blocks the entity that is missing the dependency.
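Putting the pieces together, a registrar for the hypothetical FooEntry could look roughly like the sketch below. The ModelSearchRegistrarHelper and ModelSearchDefinition method names are recalled from the Liferay 7.2 samples, so treat them as assumptions and check them against the documentation linked above:
@Component(service = {})
public class FooEntrySearchRegistrar {

    @Activate
    protected void activate(BundleContext bundleContext) {
        _serviceRegistration = _modelSearchRegistrarHelper.register(
            FooEntry.class, bundleContext,
            modelSearchDefinition -> {
                // Fields returned by default for every Hit.
                modelSearchDefinition.setDefaultSelectedFieldNames(
                    Field.ENTRY_CLASS_NAME, Field.ENTRY_CLASS_PK, Field.UID,
                    Field.MODIFIED_DATE);

                modelSearchDefinition.setModelIndexWriteContributor(
                    _modelIndexWriterContributor);
                modelSearchDefinition.setModelSummaryContributor(
                    _modelSummaryContributor);
            });
    }

    @Deactivate
    protected void deactivate() {
        _serviceRegistration.unregister();
    }

    @Reference(target = "(indexer.class.name=com.example.foo.model.FooEntry)")
    private ModelIndexerWriterContributor<FooEntry> _modelIndexWriterContributor;

    @Reference
    private ModelSearchRegistrarHelper _modelSearchRegistrarHelper;

    @Reference(target = "(indexer.class.name=com.example.foo.model.FooEntry)")
    private ModelSummaryContributor _modelSummaryContributor;

    private ServiceRegistration<?> _serviceRegistration;
}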
Conclusion
Well there it is. That's all I know about building out custom index/search code. I needed all of this for a new blog project implementation, so I figured that by dumping it here, when it comes time to check out that other big project you'll know more about why I made certain decisions in the implementation.
In the meantime, I also think these details will make it easier to understand how to handle your own entity index/search needs.
Posted over 6 years ago by David H Nebinger
Introduction
Welcome back to my series on using Liferay's REST Builder tool to generate your own Headless APIs!
In part 1 of the series, we created a new project and modules, and we started to create the OpenAPI Yaml file defining our headless services by specifying the Reusable Components section.
In part 2 of the series, we completed the OpenAPI Yaml file by adding in our paths, worked through common issues, and generated code using REST Builder.
In part 3 we reviewed all of the generated code to understand what had been built for us and touched on where we would be adding our implementation code.
In this part, we're going to create a ServiceBuilder (SB) layer we'll need for persisting the values, paying close attention to those pieces we need to implement specifically to support the headless API.
Note: You don't really need to use Service Builder. You are free to go your own way with the persistence aspect (if one is necessary at all). Some things may be harder for you to implement (e.g. returning Paged lists, applying search/filter/sort, etc.). That doesn't mean it isn't possible, it just means you'll need to do all of the heavy lifting that, had you used Service Builder, would practically be taken care of for you.
Creating the Service Builder Layer
We're going to use Service Builder for our persistence layer. I'm not going to get into all of the details about how to do this, but I will highlight those things we're adding in order to facilitate the headless API.
The most complicated aspect of the service portion is what would seem to be the easiest - the /vitamins path to get all Vitamin components.
Why is this so hard? Well, we're following the Liferay model so we need to be able to:
Support search; this is done via indexing, so our SB entity must be indexed.
Support permissions, since the new search implementation is permission-aware by default.
Support sorting of the results as determined by the caller.
Support filtering of results using the special filter strings defined here: https://portal.liferay.dev/docs/7-2/frameworks/-/knowledge_base/f/filter-sort-and-search#filter
Support pagination of results, with the page size determined by the caller.
Provide remote services so the permission checker is invoked at the right points.
In order to make all of this happen, we need to ensure that our entity is indexed. Find out how to do that here: https://portal.liferay.dev/docs/7-2/frameworks/-/knowledge_base/f/model-entity-indexing-framework
With the new indexing being permissions-aware by default, we also need to add permissions to our entities per: https://portal.liferay.dev/docs/7-2/frameworks/-/knowledge_base/f/defining-application-permissions
Because I called my component Vitamin, I didn't want my Service Builder code to also use Vitamin, otherwise I'd have to include the package everywhere. Instead I opted to call my entity PersistedVitamin. This helps distinguish between the DTO class that Headless is using and my actual persisted entity that is managed by Service Builder.
Supporting List Filter, Search and Sort
The rest of this section covers adding support for list filtering, searching and sorting using Liferay supported mechanisms. If you are not going to support list filtering, searching or sorting, or if you are planning to support one or more of them but not using Liferay techniques, this section might not apply to you.
In many of Liferay's list methods such as /v1.0/message-board-threads/{messageBoardThreadId}/message-board-messages, there are additional attributes that you can provide in the query to support search, filter, sort, paging and field restrictions...
All of these aspects are covered in the Liferay documentation:
https://portal.liferay.dev/docs/7-2/frameworks/-/knowledge_base/f/pagination
https://portal.liferay.dev/docs/7-2/frameworks/-/knowledge_base/f/filter-sort-and-search
https://portal.liferay.dev/docs/7-2/frameworks/-/knowledge_base/f/restrict-properties
The part that it doesn't really share is that filter, sort and search all require the use of the search index for the entities.
Search, for example, is performed by adding one or more keywords to the query. These feed into the index query to find matches on your entities.
Filtering is also managed by adjusting the index search query. To filter on one or more fields in your component, those fields need to be in the search index. Additionally you'll need the OData EntityModel for the fields that we'll cover in a different section below.
Sorting is also managed by adjusting the index search query. To sort on one or more fields in your component, those fields need to be in the search index. Additionally, they should be indexed using the addKeywordSortable() methods from the com.liferay.portal.kernel.search.Document interface. Sortable fields will also need to be added to the OData EntityModel implementation we'll cover soon.
Keeping this in mind, you're going to want to pay special attention to your search definitions for your custom entities (a short sketch follows this list):
Use your ModelDocumentContributor to add important text and/or keywords to get appropriate search hits.
Use your ModelDocumentContributor to add fields that you want to support filtering on.
Use your ModelDocumentContributor to add the sortable keyword fields that you want to sort on.
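As a sketch (promised above), the relevant lines of a PersistedVitaminModelDocumentContributor might look like this; the field names match the ones referenced by the EntityModel later in this post, but the getters are assumptions about my entity:
// Plain text fields for keyword search hits.
document.addText(Field.NAME, persistedVitamin.getName());
document.addText(Field.DESCRIPTION, persistedVitamin.getDescription());

// Fields we want to be able to filter on...
document.addKeyword("vitaminGroup", persistedVitamin.getGroupName());
document.addKeyword("vType", String.valueOf(persistedVitamin.getType()));

// ...and sortable copies of the fields we want to be able to sort on.
document.addKeywordSortable(Field.NAME, persistedVitamin.getName());
document.addKeywordSortable("vitaminGroup", persistedVitamin.getGroupName());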
Implementing the VitaminResourceImpl Methods
Once you have a Service Builder layer and fix up the headless-vitamins-impl dependencies, the next step is to actually start implementing the methods...
Implementing deleteVitamin()
Let's start with an easy one, the deleteVitamin() method. In VitaminResourceImpl we're going to override the method from the base class (the one with all of the annotations, remember?) and then invoke our service layer:
@Override
public void deleteVitamin(@NotNull String vitaminId) throws Exception {

    // super easy case, just pass through to the service layer.
    _persistedVitaminService.deletePersistedVitamin(vitaminId);
}
Really easy, isn't it?
So I'm going to recommend that you use only your remote services to handle entity persistence, not the local services.
Why? Well, it is really your last line of defense to ensure that a user has permission to do something like delete a vitamin record.
Sure, you can exercise control using OAuth2 scopes to block activity, but do you really want to depend upon an admin getting the OAuth2 scope configurations correct? Heck, even when I'm my own admin, I don't trust that I'll get the scopes right every time...
By using the remote services with the permission checks, I don't have to worry about the scopes being correct... If an admin (me) screws up the OAuth2 scopes, the remote services will still block the operation if the user does not have the right permissions.
Handling Conversions
Before we can get further into some of our implementation methods, we have to talk about conversions from our backend ServiceBuilder entities into the headless Components that we're going to be returning.
At the current point in time, Liferay has not really settled on a standard for dealing with entity -> component conversion. The headless-delivery-impl module from the Liferay source does conversion one way, but the headless-admin-user-impl module handles the conversion in a different way.
Because of the simplicity, I'm going to present a method here based on the headless-admin-user-impl technique. You may have a technique that works better for you that is different than this one, or you might favor the headless-delivery-impl method. And Liferay could come out with a standard way to support conversion in the next release which might make all of this moot.
I guess I'm saying that you need to handle conversion, but you're not locked into a particular way. Liferay might come out with something better, but it will be up to you to adapt to the new way or run with what you have working.
So, we need to be able to convert from a PersistedVitamin to a Vitamin component to return as part of our headless API definition. We'll create a _toVitamin() method in the VitaminResourceImpl class:
protected Vitamin _toVitamin(PersistedVitamin pv) throws Exception {
    return new Vitamin() {{
        creator = CreatorUtil.toCreator(_portal, _userLocalService.getUser(pv.getUserId()));
        articleId = pv.getArticleId();
        group = pv.getGroupName();
        description = pv.getDescription();
        id = pv.getSurrogateId();
        name = pv.getName();
        type = _toVitaminType(pv.getType());

        attributes = ListUtil.toArray(pv.getAttributes(), VALUE_ACCESSOR);
        chemicalNames = ListUtil.toArray(pv.getChemicalNames(), VALUE_ACCESSOR);
        properties = ListUtil.toArray(pv.getProperties(), VALUE_ACCESSOR);
        risks = ListUtil.toArray(pv.getRisks(), VALUE_ACCESSOR);
        symptoms = ListUtil.toArray(pv.getSymptoms(), VALUE_ACCESSOR);
    }};
}
So first off, I have to apologize for using double brace instantiation... I too see it as an anti-pattern (https://blog.jooq.org/2014/12/08/dont-be-clever-the-double-curly-braces-anti-pattern/), but my goal was to follow "the Liferay way" as laid out in the headless-admin-user-impl module, and that was the pattern Liferay used. Since Liferay doesn't use the Builder pattern often, I think double brace instantiation is being used as a substitute.
Given my own preference, I would follow the Builder pattern or even a fluent pattern to simplify object population. After all, IntelliJ will easily create Builder classes for me (you do know it is capable of doing that, right?).
The method relies on an external CreatorUtil class (that I copied from Liferay's code), a _toVitaminType() method that converts from an internal integer code to the component's enum, and a VALUE_ACCESSOR that handles the internal objects that are part of the implementation details into a String array thanks to ListUtil's toArray() method.
Long story short, this method can handle the conversion that we need to perform in our actual method implementations.
Implementing getVitamin()
Let's look at another easy one, the getVitamin() method, the one that will return a single entity given the vitaminId:
@Override
public Vitamin getVitamin(@NotNull String vitaminId) throws Exception {

    // fetch the entity class...
    PersistedVitamin pv = _persistedVitaminService.getPersistedVitamin(vitaminId);

    return _toVitamin(pv);
}
Here we retrieve the PersistedVitamin instance from the service layer, then pass the retrieved object to the _toVitamin() method for conversion.
Implementing postVitamin(), patchVitamin() and putVitamin()
Since we've seen the pattern above, I'm lumping these together...
postVitamin() is the method for the POST on /vitamins and represents creating a new entity.
patchVitamin() is the method for the PATCH on /vitamins/{vitaminId} and represents patching an existing entity (only changing values given in the incoming object, leaving other existing properties alone).
putVitamin() is the method for the PUT on /vitamins/{vitaminId} and represents the replacement of an existing entity, replacing all persisted values with what is passed in, even if the fields are null/empty.
Since I created my ServiceBuilder layer and customized it for these entry points, my implementations in the VitaminResourceImpl class look pretty light:
@Override
public Vitamin postVitamin(Vitamin v) throws Exception {
    PersistedVitamin pv = _persistedVitaminService.addPersistedVitamin(
        v.getId(), v.getName(), v.getGroup(), v.getDescription(), _toTypeCode(v.getType()),
        v.getArticleId(), v.getChemicalNames(), v.getProperties(), v.getAttributes(),
        v.getSymptoms(), v.getRisks(), _getServiceContext());

    return _toVitamin(pv);
}

@Override
public Vitamin patchVitamin(@NotNull String vitaminId, Vitamin v) throws Exception {
    PersistedVitamin pv = _persistedVitaminService.patchPersistedVitamin(vitaminId,
        v.getId(), v.getName(), v.getGroup(), v.getDescription(), _toTypeCode(v.getType()),
        v.getArticleId(), v.getChemicalNames(), v.getProperties(), v.getAttributes(),
        v.getSymptoms(), v.getRisks(), _getServiceContext());

    return _toVitamin(pv);
}

@Override
public Vitamin putVitamin(@NotNull String vitaminId, Vitamin v) throws Exception {
    PersistedVitamin pv = _persistedVitaminService.updatePersistedVitamin(vitaminId,
        v.getId(), v.getName(), v.getGroup(), v.getDescription(), _toTypeCode(v.getType()),
        v.getArticleId(), v.getChemicalNames(), v.getProperties(), v.getAttributes(),
        v.getSymptoms(), v.getRisks(), _getServiceContext());

    return _toVitamin(pv);
}
Like I said, they are pretty light...
Since I'm going to the service layer, I need a ServiceContext. Liferay provides a com.liferay.headless.common.spi.service.context.ServiceContextUtil that has just the method I need to create my ServiceContext. It starts a context, I just need to add some additional stuff into it like the company id and the current user id. So I wrapped all of this into the _getServiceContext() method. And good news for me, in future versions of the REST Builder, I'm going to be getting some new context variables which will make getting a valid ServiceContext much easier.
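For reference, a hand-rolled version of such a helper might look like the sketch below; it skips ServiceContextUtil entirely and just sets the values my service layer needs, so take it as an illustration rather than the exact code from the repo:
private ServiceContext _getServiceContext() {
    ServiceContext serviceContext = new ServiceContext();

    // The company comes from the context injected into the resource class.
    serviceContext.setCompanyId(contextCompany.getCompanyId());

    // The current user comes from the permission checker bound to the request.
    serviceContext.setUserId(
        PermissionThreadLocal.getPermissionChecker().getUserId());

    return serviceContext;
}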
My ServiceBuilder methods all use the blown-out parameter passing we all know and love about ServiceBuilder. The PersistedVitamin instance I get back from the method calls gets passed off to _toVitamin() for conversion, and the result is returned.
And that's all of the simple methods to deal with. We still have to cover the getVitaminsPage() method, but before we do that we have to cover the EntityModels...
EntityModels
Earlier I discussed how Liferay supports list filtering, searching and sorting by using the search index. I also discussed how fields available for filtering or sorting must be part of an EntityModel definition for your components. Fields from the component that are not part of the EntityModel cannot be filtered nor sorted.
An additional side effect: since the EntityModel exposes fields from the search index for filtering and sorting, those fields do not have to correspond to Component fields.
For example, in an EntityModel definition you could add an entry for a creatorId that filters on the user id in the search index. The component definition might have a Creator field and not a creatorId field, but the creatorId can still be used in filtering and/or sorting since it is part of the EntityModel.
So we have to build out an EntityModel, one that defines both the fields we want to support filtering on as well as the fields we want to support sorting on. We're going to be using mostly existing Liferay utilities to help put our EntityModel class together.
Here it is:
public class VitaminEntityModel implements EntityModel {

    public VitaminEntityModel() {
        _entityFieldsMap = Stream.of(
            // chemicalNames is a string array of the chemical names of the vitamins/minerals
            new CollectionEntityField(
                new StringEntityField(
                    "chemicalNames", locale -> Field.getSortableFieldName("chemicalNames"))),

            // we'll support filtering based upon user creator id.
            new IntegerEntityField("creatorId", locale -> Field.USER_ID),

            // sorting/filtering on name is okay too
            new StringEntityField(
                "name", locale -> Field.getSortableFieldName(Field.NAME)),

            // as is sorting/filtering on the vitamin group
            new StringEntityField(
                "group", locale -> Field.getSortableFieldName("vitaminGroup")),

            // and the type (vitamin, mineral, other).
            new StringEntityField(
                "type", locale -> Field.getSortableFieldName("vType"))
        ).collect(
            Collectors.toMap(EntityField::getName, Function.identity())
        );
    }

    @Override
    public Map<String, EntityField> getEntityFieldsMap() {
        return _entityFieldsMap;
    }

    private final Map<String, EntityField> _entityFieldsMap;
}
So the Field names, those come from the names I used in the PersistedVitaminModelDocumentContributor class in the service layer to add my field values.
I've included definitions for the chemicalNames, Field.USER_ID, Field.NAME, vitaminGroup and vType Fields from the search index. Of these definitions, the creatorId field (the one the filter would use) doesn't exist as a field of the Vitamin component definition.
As for the other fields that are part of the Vitamin component, I just don't feel like I need to allow sorting or filtering on them. Obviously this kind of decision will normally be driven by your requirements.
Liferay keeps these classes in an "odata.entity.v1_0" package under your internal package, so I created the com.dnebinger.headless.delivery.internal.odata.entity.v1_0 package and put my file there.
Now that the class is ready, we must also decorate the VitaminResourceImpl class so it correctly reports that it can serve an EntityModel.
Here are the changes you need to make:
The ResourceImpl class needs to implement the com.liferay.portal.vulcan.resource.EntityModelResource interface.
The class must implement the getEntityModel() method that returns an EntityModel instance.
And that's it. Because my VitaminEntityModel is pretty simple and not very dynamic, my implementation is like:
public class VitaminResourceImpl extends BaseVitaminResourceImpl
    implements EntityModelResource {

    private VitaminEntityModel _vitaminEntityModel = new VitaminEntityModel();

    @Override
    public EntityModel getEntityModel(MultivaluedMap multivaluedMap) throws Exception {
        return _vitaminEntityModel;
    }
It is important to note that this may not be a typical implementation. Liferay's component resource implementation classes have significantly more complicated and dynamic EntityModel generation, but this is due to the complexity of the entities involved (for example, StructuredContent is a mish-mash of a JournalArticle, a DDM structure and a template, and I think there may also be a kitchen sink in there too if you look hard enough).
So don't blindly copy my method and run with it. It may work in your case, but it may not. For more complicated scenarios, check out the Liferay implementations for EntityModel classes as well as the getEntityModel() methods in the component resource implementations.
Implementing getVitaminsPage()
So this is probably the most complicated method to implement. Not because it is challenging, per se. It is just dependent upon so many other things...
The Liferay list handling functionality here comes from the search index, not the database, so it requires that our entities are indexed.
This is also the method that supports the filter, search and sort parameters; these too require that the entity is indexed. And as we just saw, filter and sort are also dependent upon the EntityModel classes.
And finally, since it is calling out to Liferay methods, the implementation itself will seem pretty opaque and out of our control.
Here's what we end up with:
public Page<Vitamin> getVitaminsPage(String search, Filter filter, Pagination pagination, Sort[] sorts) throws Exception {
    return SearchUtil.search(
        booleanQuery -> {
            // does nothing, we just need the UnsafeConsumer method
        },
        filter, PersistedVitamin.class, search, pagination,
        queryConfig -> queryConfig.setSelectedFieldNames(
            Field.ENTRY_CLASS_PK),
        searchContext -> searchContext.setCompanyId(contextCompany.getCompanyId()),
        document -> _toVitamin(
            _persistedVitaminService.getPersistedVitamin(
                GetterUtil.getLong(document.get(Field.ENTRY_CLASS_PK)))),
        sorts);
}
}
So we're using the SearchUtil.search() method which knows how to process everything...
The first argument is the UnsafeConsumer, which is basically responsible for tweaking the booleanQuery as necessary for your entities. I didn't need one here, but there are examples in the Liferay headless-delivery module. The StructuredContent version that finds articles by site id adds the site id as a query argument, the "flatten" parameter tweaks the query scope, those kinds of things.
The filter, search, and pagination arguments that we get from the headless layer are passed straight through; they will be applied to the boolean query to filter and search results, and pagination will make sure that we get a page worth of results.
The queryConfig is asking for just the primary key values and none of the other field data. Since we don't convert directly from a search index Document but fetch the ServiceBuilder entity instead, the query doesn't need to return any of the other Fields in the Documents.
The next to last argument is another UnsafeFunction which is responsible for applying the transformation from the Document to the component type; the implementation provided fetches the PersistedVitamin instance using the primary key value extracted from the Document, and that PersistedVitamin is passed through _toVitamin() to handle the final conversion.
Wrapping Up
So now we're actually done with all of the coding activities, but we're not completely done...
We want to re-run the buildREST command again. We've added methods to our VitaminResourceImpl class and we want to make sure we have the test cases ready to apply to them.
Next, we need to build and deploy our modules and clean up any deployment issues such as unresolved references and stuff. We deploy the vitamins-api and vitamins-service for the ServiceBuilder tier and the vitamins-headless-api and vitamins-headless-impl modules for the Headless tier.
When those are ready, we should drop into our headless-vitamins-test module and run all of our test cases (and if there are some that are missing, well we can recreate those too).
When all of that is ready, we might want to consider publishing our Headless API to Swaggerhub so others can consume it.
We don't want to use the Yaml file we created for REST Builder. Instead we want to point our browsers at http://localhost:8080/o/headless-vitamins/v1.0/openapi.yaml and use that file for the submission. It will have all of the necessary parts in place plus some additional components such as the PageVitamin type, etc.
Conclusion
And there we have it!
We started in part 1, creating our workspace and modules for our new Headless adventure. We also started the OpenAPI Yaml file that REST Builder would eventually use to generate code by defining our Reusable Components section with our Component type definitions.
In part 2, we completed the OpenAPI Yaml file for REST Builder by adding in our path definitions. We had a REST Builder generation failure once and covered some of the common formatting errors that can cause generation failures. We fixed those and then successfully generated code using REST Builder.
In part 3 we reviewed all of the generated code in all of the modules to see what was created for us and hinted where our modifications were going to be made.
And finally here, in part 4, we created a Service Builder layer and included resource permissions (for permission checking in the remote services) and entity indexing (to support the list filter/search/sort capabilities of Liferay's Headless infrastructure). We then fleshed out our VitaminResourceImpl methods, and discussed how to handle entity-to-Component conversions as well as the EntityModel classes needed to facilitate filters and sorts.
We wrapped it all up with testing and possibly publishing our API to Swaggerhub for everyone to enjoy.
It's been a long road, but certainly an interesting one for me. I hope you enjoyed it also.
And once again, here's the repo for the blog series: https://github.com/dnebing/vitamins
Posted over 6 years ago by David H Nebinger
Just a quick post today...
So I've been using the Target Platform like all the time now. I don't want to have to worry about versions, especially those Liferay modules that change version numbers on every fix pack...
However, I've found that sometimes the version numbers just aren't there. But often I only find this out after I've stripped out the version and tried a build.
My friend Greg Amerson shared a trick with me today to help figure out if I can strip the versions or not, and I wanted to share it with you...
So, assuming you've enabled the target platform in your gradle.properties file in the root of your workspace, you can issue the following command:
$ gradlew dependencyManagement | grep osgi.service.component
<-------------> 0% EXECUTING [0s]
> :modules:headless-vitamins-test:dependencyManagement
org.osgi:org.osgi.service.component.annotations 1.3.0
org.osgi:org.osgi.service.component.annotations 1.3.0
If you get a match for what you're looking for, then you don't need to specify the version.
If you don't get a match, then the version is not available in the target platform BOMs and you will need to include the version in your dependency list.
Enjoy!
Posted over 6 years ago by David H Nebinger
Introduction
In part 1 of this series, we started a project to leverage Liferay's new REST Builder tool for generating Headless APIs. We defined the Reusable Components section, the section where we define our request and response objects, namely the Vitamin component and a copy of Liferay's Creator component.
In part 2 of the series, we finished the OpenAPI Yaml file by defining our paths (the endpoints), then moved on to code generation where we encountered and solved some common problems. We wrapped with successfully generating code.
In this part, we're going to take a look at the generated code and how we will add in our implementation code where we need it. Let's get cracking!
Looking at the Generated Code
So we have four modules where code has been generated: headless-vitamins-api, headless-vitamins-client, headless-vitamins-impl, and headless-vitamins-test.
Although REST Builder generates code, it does not modify the build.gradle files nor the bnd.bnd files. It will be up to you to add dependencies and export packages. In the sections below I'll share the settings I used, but you'll need to come up with the set necessary for your implementation.
Let's look at each module individually...
headless-vitamins-api
The API module is similar in concept to a Service Builder API module: it contains the interface for our resource (our service), and it also has concrete POJO classes for our component types, Vitamin and Creator.
Well, they're more than just pure POJOs... Our component type classes have additional setters that will be invoked by the framework when deserializing our object. Let's take a look at one from the Creator component type:
@JsonIgnore
public void setAdditionalName(
    UnsafeSupplier<String, Exception> additionalNameUnsafeSupplier) {

    try {
        additionalName = additionalNameUnsafeSupplier.get();
    }
    catch (RuntimeException re) {
        throw re;
    }
    catch (Exception e) {
        throw new RuntimeException(e);
    }
}
Pretty tame. Since it's generated code, you don't really need to worry about these guys, but I wanted to highlight one so you wouldn't be surprised.
Our VitaminResource is the interface for our resource (aka service). It is also generated, and it comes from the paths defined in our OpenAPI Yaml file. You might notice that, after invoking REST Builder, our Yaml file has a new operationId attribute added on each path; those values match exactly the method names in our interface.
Since we have so few methods, I'll just share the interface here:
@Generated("")
public interface VitaminResource {

	public Page<Vitamin> getVitaminsPage(
			String search, Filter filter, Pagination pagination, Sort[] sorts)
		throws Exception;

	public Vitamin postVitamin(Vitamin vitamin) throws Exception;

	public void deleteVitamin(String vitaminId) throws Exception;

	public Vitamin getVitamin(String vitaminId) throws Exception;

	public Vitamin patchVitamin(String vitaminId, Vitamin vitamin)
		throws Exception;

	public Vitamin putVitamin(String vitaminId, Vitamin vitamin)
		throws Exception;

	public void setContextCompany(Company contextCompany);

}
Our /vitamins path, the one that returns the array of Vitamin objects? That's for the first method, the getVitaminsPage() method. We won't have a PageVitamin component declared in our own Yaml file, but in the exported Yaml file there will be one injected in there for us.
Our other methods in the resource interface match up with the other paths that are defined in the Yaml file.
I needed to add some dependencies to my build.gradle file for the API module:
dependencies {
compileOnly group: "com.fasterxml.jackson.core", name: "jackson-annotations", version: "2.9.9"
compileOnly group: "com.liferay", name: "com.liferay.petra.function"
compileOnly group: "com.liferay", name: "com.liferay.petra.string"
compileOnly group: "com.liferay", name: "com.liferay.portal.vulcan.api"
compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel"
compileOnly group: "io.swagger.core.v3", name: "swagger-annotations", version: "2.0.5"
compileOnly group: "javax.servlet", name: "javax.servlet-api"
compileOnly group: "javax.validation", name: "validation-api", version: "2.0.1.Final"
compileOnly group: "javax.ws.rs", name: "javax.ws.rs-api"
compileOnly group: "org.osgi", name: "org.osgi.annotation.versioning"
}
In order to expose my components and resource interface, I also made a small change to the bnd.bnd file:
Export-Package: com.dnebinger.headless.vitamins.dto.v1_0, \
com.dnebinger.headless.vitamins.resource.v1_0
headless-vitamins-client
The code in this module builds a Java-based client for invoking the Headless API.
The client entry point will be in the .client.resource.v1_0.Resource class. In my case, this is the com.dnebinger.headless.vitamins.client.resource.v1_0.VitaminResource class.
There's a static method for each of our paths, and each method takes the same args and returns the same objects.
Behind the scenes each method will use an HttpInvoker instance to call the web service on localhost:8080 using the test@liferay.com / test credentials. If you want to test a remote service or use different credentials, you'll need to hand-edit the Resource class to use the different values.
It's up to you to build a main class or other code to invoke the client code, but having a full client library for testing is a great start!
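As a rough example of what such a main class might look like, here's a sketch. I'm assuming the static client methods mirror the resource interface and that the Vitamin component exposes the id and name fields we defined in part 1; adjust to match your own component:
import com.dnebinger.headless.vitamins.client.dto.v1_0.Vitamin;
import com.dnebinger.headless.vitamins.client.resource.v1_0.VitaminResource;

public class VitaminClientExample {

	public static void main(String[] args) throws Exception {
		// Build a vitamin to send to the service.
		Vitamin vitamin = new Vitamin();

		vitamin.setId("C");
		vitamin.setName("Vitamin C");

		// Create the vitamin via the remote API, then fetch it back by id.
		Vitamin created = VitaminResource.postVitamin(vitamin);

		Vitamin fetched = VitaminResource.getVitamin(created.getId());

		System.out.println("Created vitamin: " + fetched.getName());
	}
}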
The generated headless-vitamins-test module relies on the headless-vitamins-client module for testing the service layer.
The headless-vitamins-client module does not have any external dependencies, but you do need to export the packages in the bnd.bnd file:
Export-Package: com.dnebinger.headless.vitamins.client.dto.v1_0, \
com.dnebinger.headless.vitamins.client.resource.v1_0
headless-vitamins-test
We're going to skip the headless-vitamins-impl module and briefly cover the headless-vitamins-test.
The generated code here provides all of the integration tests for your service modules. It leverages the client module for invoking the remote APIs.
In this module we get two classes, a BaseResourceTestCase and a ResourceTest, so I have BaseVitaminResourceTestCase and VitaminResourceTest.
The VitaminResourceTest class is where I would go to add any additional tests that I want to include that the Base class doesn't already implement for me. They'd be larger-scale tests that maybe take advantage of other modules, or error-validation tests such as trying to add duplicate primary keys or delete an object that doesn't exist.
Basically any testing that the simple invocation of the raw resource methods individually cannot cover.
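For example, here's a hedged sketch of what one of those extra tests might look like in VitaminResourceTest. The test method name is purely hypothetical, and I'm again assuming the client resource exposes a static deleteVitamin() method as described above:
import com.dnebinger.headless.vitamins.client.resource.v1_0.VitaminResource;

import org.junit.Assert;
import org.junit.Test;

public class VitaminResourceTest extends BaseVitaminResourceTestCase {

	// Hypothetical extra test: deleting a vitamin that doesn't exist
	// should fail rather than silently succeed.
	@Test
	public void testDeleteMissingVitaminFails() throws Exception {
		try {
			VitaminResource.deleteVitamin("does-not-exist");

			Assert.fail("Expected the delete of a missing vitamin to fail");
		}
		catch (Exception e) {
			// Expected: the service reports the missing entity.
		}
	}
}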
My build.gradle file for this module took a lot of additions:
dependencies {
testIntegrationCompile group: "com.fasterxml.jackson.core", name: "jackson-annotations", version: "2.9.9"
testIntegrationCompile group: "com.fasterxml.jackson.core", name: "jackson-core", version: "2.9.9"
testIntegrationCompile group: "com.fasterxml.jackson.core", name: "jackson-databind", version: "2.9.9.1"
testIntegrationCompile group: "com.liferay", name: "com.liferay.arquillian.extension.junit.bridge", version: "1.0.19"
testIntegrationCompile group: "com.liferay.portal", name: "com.liferay.portal.kernel"
testIntegrationCompile project(":modules:headless-vitamins:headless-vitamins-api")
testIntegrationCompile project(":modules:headless-vitamins:headless-vitamins-client")
testIntegrationCompile group: "com.liferay", name: "com.liferay.portal.odata.api"
testIntegrationCompile group: "com.liferay", name: "com.liferay.portal.vulcan.api"
testIntegrationCompile group: "com.liferay", name: "com.liferay.petra.function"
testIntegrationCompile group: "com.liferay", name: "com.liferay.petra.string"
testIntegrationCompile group: "javax.validation", name: "validation-api", version: "2.0.1.Final"
testIntegrationCompile group: "commons-beanutils", name: "commons-beanutils"
testIntegrationCompile group: "commons-lang", name: "commons-lang"
testIntegrationCompile group: "javax.ws.rs", name: "javax.ws.rs-api"
testIntegrationCompile group: "junit", name: "junit"
testIntegrationCompile group: "com.liferay.portal", name: "com.liferay.portal.test"
testIntegrationCompile group: "com.liferay.portal", name: "com.liferay.portal.test.integration"
}
Some of these dependencies are defaults necessary just for the generated classes (e.g. junit and the Liferay test modules), while others will depend upon your project (e.g. the client and api modules, perhaps other modules if you need them). You may have to go a few rounds getting the list that works for you.
My bnd.bnd file in this module did not require modification since I'm not going to export any of the classes or packages.
headless-vitamins-impl
Finally we get to the fun one. This is the module where your implementation code goes. The REST Builder has done a decent job of generating a lot of the starter code for us; let's take a look at what we get.
com.dnebinger.headless.vitamins.internal.graphql - Yeah, that's right, GraphQL Baby! Your headless implementation includes a GraphQL endpoint exposing queries and mutations based upon your defined paths. Note that the GraphQL endpoint is not merely proxying calls to the REST implementation, as you often see with this kind of mix; in this implementation, GraphQL invokes your Resource directly to handle the queries and mutations. So just by using REST Builder, you automatically get GraphQL too!
com.dnebinger.headless.vitamins.internal.jaxrs.application - This is where the JAX-RS Application class is. It doesn't contain anything interesting, but does register the application into Liferay's OSGi container.
com.dnebinger.headless.vitamins.internal.resource.v1_0 - This is the package where we'll be modifying code...
You'll get an OpenAPIResourceImpl.java class; this handles the path that returns the OpenAPI yaml file, which you could load, for instance, into Swagger Hub.
For each Resource interface you have, you'll get an abstract BaseResourceImpl base class and a concrete ResourceImpl class for you to do your work in.
So I have a BaseVitaminResourceImpl class and a VitaminResourceImpl.
If you check out a method in the base class, you'll see it is decorated like crazy with annotations for Swagger and JAX-RS. Let's check the one for the getVitaminsPage() method, the one that is on /vitamins and is used to return the array of Vitamin components:
@Override
@GET
@Operation(
description = "Retrieves the list of vitamins and minerals. Results can be paginated, filtered, searched, and sorted."
)
@Parameters(
value = {
@Parameter(in = ParameterIn.QUERY, name = "search"),
@Parameter(in = ParameterIn.QUERY, name = "filter"),
@Parameter(in = ParameterIn.QUERY, name = "page"),
@Parameter(in = ParameterIn.QUERY, name = "pageSize"),
@Parameter(in = ParameterIn.QUERY, name = "sort")
}
)
@Path("/vitamins")
@Produces({"application/json", "application/xml"})
@Tags(value = {@Tag(name = "Vitamin")})
public Page<Vitamin> getVitaminsPage(
@Parameter(hidden = true) @QueryParam("search") String search,
@Context Filter filter, @Context Pagination pagination,
@Context Sort[] sorts)
throws Exception {
return Page.of(Collections.emptyList());
}
Like, ick, right?
Well, that's one of the advantages of what REST Builder is going to do for us. Since all of the annotations are defined in the base class, we just don't need to worry about them...
See that return statement, the one that is passing Page.of(Collections.emptyList())? So this is the stub method the base class provides; it doesn't provide a worthwhile implementation, but it does ensure that a value is returned in case we don't implement it.
So when we are ready to implement this method, we'll go into the VitaminResourceImpl class (currently empty) and add the following method:
@Override
public Page<Vitamin> getVitaminsPage(
		String search, Filter filter, Pagination pagination, Sort[] sorts)
	throws Exception {

	List<Vitamin> vitamins = new ArrayList<>();
	long totalVitaminsCount = ...;

	// write code here, should add to the list of Vitamin objects

	return Page.of(vitamins, Pagination.of(0, pagination.getPageSize()), totalVitaminsCount);
}
No annotations! Like I said, the annotations are all on the method we're overriding, so all of the configuration is already taken care of for us!
So unlike Service Builder generated code, you're not going to see a bunch of "This file is generated, do not modify this file" comments everywhere. You will see the @Generated("") annotation on all classes which will be [re-]generated when you run REST Builder again.
Our BaseResourceImpl class is annotated like this. It is a generated file that will be re-written every time you run REST Builder. So don't mess around with the annotations, methods, or method implementations in this file; keep all of your modifications in the ResourceImpl class.
If you do need to tamper with the annotations (I wouldn't recommend it), you should be able to do this in the ResourceImpl class and they should override the annotations from the base class.
So our build.gradle file needs some dependencies added. My full file is:
buildscript {
dependencies {
classpath group: "com.liferay", name: "com.liferay.gradle.plugins.rest.builder", version: "1.0.21"
}
repositories {
maven {
url "https://repository-cdn.liferay.com/nexus/content/groups/public"
}
}
}
apply plugin: "com.liferay.portal.tools.rest.builder"
dependencies {
compileOnly group: "com.fasterxml.jackson.core", name: "jackson-annotations", version: "2.9.9"
compileOnly group: "com.liferay", name: "com.liferay.adaptive.media.api"
compileOnly group: "com.liferay", name: "com.liferay.adaptive.media.image.api"
compileOnly group: "com.liferay", name: "com.liferay.headless.common.spi"
compileOnly group: "com.liferay", name: "com.liferay.headless.delivery.api"
compileOnly group: "com.liferay", name: "com.liferay.osgi.service.tracker.collections"
compileOnly group: "com.liferay", name: "com.liferay.petra.function"
compileOnly group: "com.liferay", name: "com.liferay.petra.string"
compileOnly group: "com.liferay", name: "com.liferay.portal.odata.api"
compileOnly group: "com.liferay", name: "com.liferay.portal.vulcan.api"
compileOnly group: "com.liferay", name: "com.liferay.segments.api"
compileOnly group: "com.liferay.portal", name: "com.liferay.portal.impl"
compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel"
compileOnly group: "io.swagger.core.v3", name: "swagger-annotations", version: "2.0.5"
compileOnly group: "javax.portlet", name: "portlet-api"
compileOnly group: "javax.servlet", name: "javax.servlet-api"
compileOnly group: "javax.validation", name: "validation-api", version: "2.0.1.Final"
compileOnly group: "javax.ws.rs", name: "javax.ws.rs-api"
compileOnly group: "org.osgi", name: "org.osgi.service.component", version: "1.3.0"
compileOnly group: "org.osgi", name: "org.osgi.service.component.annotations"
compileOnly group: "org.osgi", name: "org.osgi.core"
compileOnly project(":modules:headless-vitamins:headless-vitamins-api")
}
All of the packages are internal, so I don't need anything in my bnd.bnd file.
Conclusion
Why are we stopping? We're just getting to the point where we can start building out the implementations!
Well, it's a good point to stop...
In part 1 we created the project and started our OpenAPI Yaml by defining our Reusable components.
In part 2 we added all of the path definitions for our OpenAPI service and used REST Builder to generate the code.
Here in part 3 we reviewed all of the code that was generated for us, including touching on where we make code modifications and how we won't have to worry about the annotations in our implementation code.
In the final part of this series, we're going to add a Service Builder module to the project for data storage, then we're going to implement all of our resource methods to take advantage of the Service Builder code.
See you there!
https://github.com/dnebing/vitamins