|
Posted
about 13 years
ago
by
cathal connolly
DotNetNuke already contains a lot of useful logging information that can help with problems on your site – these are all covered in the debugging category on the Wiki, but the two most common ones are the event viewer and log4net. However, both of
them have some drawbacks. Whilst the event viewer logs both exceptions and events, allowing site administrators to see changes on the site, its primary use case is for successfully installed websites, and it isn’t as useful for sites failing to install or upgrade (though users can view the EventLog table in the upgrade scenario to see if it provides useful information). Log4net is much better placed to deal with install/upgrade issues, but its default setting is “Error”, so whilst that will catch any failures during an install/upgrade, it is only logging failures and not successes. Whilst this can be changed to “All”, it’s rare that a user will think of doing this before an install, so by the time you change it you’ll likely have lost the useful information. In addition, “All” is very verbose as it logs a huge amount of information.
To address these concerns and help users diagnose errors that occur during installation, upgrade and extension installation (and any changes made in configuration files via the XMLMerge functionality), we’ve added a new “Installer” log in DotNetNuke 7.0. This log is enabled by default, and cannot be disabled. In addition, it logs both successful and unsuccessful operations (i.e. information and errors), as well as (where sensible) logging the beginning and ending of operations.
As this logging happens in much more focused areas, we’ve also added additional granular logging. For instance, during initial installation we test permissions by creating a test file and folder – previously if this failed we could only alert you that the permissions were wrong, but now the logging is at a more granular level, so you can see the exact step that failed. In this example, rather than confirming whether or not file permissions as a whole are correct, users can now see the 4 individual checks – this helps with issues where, for instance, read/write permissions were set but modify was not.
11/8/2012 2:43:20 PM [INFO] DotNetNuke.Services.Upgrade.Internals.Steps.BaseInstallationStep Folder Creation Check
11/8/2012 2:43:20 PM [INFO] DotNetNuke.Services.Upgrade.Internals.Steps.BaseInstallationStep File Creation Check
11/8/2012 2:43:20 PM [INFO] DotNetNuke.Services.Upgrade.Internals.Steps.BaseInstallationStep File Deletion Check
11/8/2012 2:43:20 PM [INFO] DotNetNuke.Services.Upgrade.Internals.Steps.BaseInstallationStep Folder Deletion Check
The “Installer” log files can be found in the usual location of portals/_default/logs, and will have a name similar to InstallerLog2012118.resources (the .resources extension is used to ensure no-one can download your log files even if they work out the URL). In addition, users can log in as host (superuser), go to host->host settings, click the “Logs” tab and view the contents of these (and other) logs.
This has been a popular request on the Community Exchange, and we hope that you will find it useful.
|
|
Posted
about 13 years
ago
by
Shaun Walker
The software industry moves at a lightning pace, and it is only through constant focus and continuous investment that a software product can remain both stable and relevant over the long term. As we approach the 10 Year Anniversary of the
DotNetNuke platform, it seems only fitting that we are on the verge of announcing yet another significant product milestone. DotNetNuke 7.0 is just around the corner and represents a bold step forward for our Content Management Platform, including substantial business productivity enhancements, investments in web platform relevance, and a significant overhaul and modernization of the user interface and user experience.
It has been five months since I posted the announcement that the next major version of the platform was going to be DotNetNuke 7.0. This announcement created tremendous excitement and anticipation in the DotNetNuke community, as major version increments have always been utilized as an opportunity to introduce revolutionary new product features and capabilities. After months of intense product development, the finish line is finally in sight.
With that, I am pleased to announce that we released a Release Candidate (RC) of DotNetNuke 7.0 yesterday. You can download the RC from our project page on CodePlex. A Release Candidate represents a software version which is very near to “release” quality. So although we will not be officially endorsing the RC for production use, or providing an official upgrade path, it does represent a significant milestone in our software development efforts (if you are looking for a more detailed explanation of our software release terminology, I would encourage you to read the blog written by Co-Founder Joe Brinkman titled "What's In A Name?").
Modernizing a software platform does have its share of challenges from a backward compatibility perspective and, as usual, we are taking great care in ensuring a seamless upgrade path for our customers. In order to remain relevant and progressive, you need to be aware that DotNetNuke 7.0 has adopted a new set of baseline infrastructure requirements, including ASP.NET 4.0. As a result we are encouraging all major stakeholders in the ecosystem (module developers, designers, partners, customers, etc.) to take the opportunity to install the RC in their own local environments. This is the last opportunity to let us know about any final issues which may need to be addressed prior to final release.
Mark your calendars now… the expected public release date (RTM) for DotNetNuke 7.0 will be Wednesday, November 28th.
On a side note, we expect to release a 6.2.5 Maintenance version today. This release contains some high priority product quality improvements as well as security patches for some vulnerabilities reported through our standard ecosystem channels. As a result we will be encouraging all of our customers to upgrade to the 6.2.5 release as soon as it is available.
I hope everyone is as excited as I am about the upcoming DotNetNuke 7.0 release. Please take the opportunity over the next week to put the new platform through its paces. Remember, only through our collective efforts can we ensure that this release has the greatest market impact of any DotNetNuke release to date.
|
|
Posted
about 13 years
ago
by
cathal connolly
The 6.2.5 CE, PE and EE versions of DotNetNuke have been released.
These releases fix four security issues. Three of these are rated as “low” and one is rated as a "critical" security issue.
The bulletins can be read at:
Failure to encode module title – “Low” issue
List function contains a cross-site scripting issue – “Low” issue
Member directory results fail to apply extended visibility correctly – “Low” issue
Profile avatars fail to filter invalid values – “Critical” issue
Acknowledgements
We would like to thank the following for responsibly disclosing issues to our security team, and allowing us the time to resolve them.
Sunil Yadav via Secunia SVCRP
Chris Hammond
Rutger Buijzen (DotControl) http://www.dotcontrol.nl
As always we recommend you upgrade as soon as possible, particularly as this release contains a “Critical” fix.
If you're new to upgrading I recommend you read the "detailed installation guide" found here, and the excellent blog entry from Erik here. For users who are running 4.6.2 or above, I recommend you read this blog entry, which details how to use the upgrade package to easily merge any web.config changes.
You can read more details about these issues and our security policy here.
|
|
Posted
about 13 years
ago
by
Ernst Peter Tamminga
This version, 06.00.04, is a release that should resolve all outstanding issues and includes a small number of changes. Most noteworthy is a small fix to the SQL provider scripts to make them compatible with the latest version of SQL Azure. We encourage you all to give this new version a test before updating your production environment. And be sure to make a backup before installing this new release. Better safe than sorry! Events 06.00.04 can be downloaded from CodePlex.
Release notes DNN Events 06.00.04
Events 06.00.04 will work with any DNN version 6.1.2 and up.
Full details on the changes can be found at http://dnnevents.codeplex.com/workitem/list/basic.
BUG FIXES
Fixed problem where it wasn't possible to edit settings when more than one module is on a page.
Fixed problem where token replace was occurring for PayPal payment notifications.
Fixed problem installing on Microsoft Azure.
CHANGES
Added some additional tokens to support formatting: [event:subcalendarnameclean],
[event:eventid], [event:categoryname], [event:durationdays], [event:durationdayslabel],
[IFNOTMULTIDAY], [IFMULTIDAY].
Can you assist us?
Is there a volunteer #dotnetnuke developer around who can help us (the Events team) to implement the Telerik RadScheduler in the DNN Events module? This would be a serious improvement for the module, and we could use some assistance with it. Your reward: eternal fame in the DNN community and my offer of a free drink when we meet in person.
|
|
Posted
about 13 years
ago
by
cathal connolly
DotNetNuke has always shipped with Public set as its default portal (site) registration (you can read more about the registration options here). This was a decision that was inherited from the IBuySpy portal codebase that early versions of
DotNetNuke utilized, and whilst it has advantages in enabling users to sign up immediately after installation, there are some drawbacks. We analyzed the last 18 months of security issues and found that approximately 44% of them required the potential hacker to have a valid, authorized user account to start with. As the install default for registration was “Public”, gaining a valid user was a trivial step. As many sites ultimately aren't intended for public users (e.g. a personal site may have only one user, or a business site may use Active Directory integration), the decision was made to change the site registration type to "Private" in 7.0.0 to add an additional layer of defense-in-depth. Now, when you perform an installation, the registration step displays a note on the screen informing you of this. For a user to gain portal access now, the site administrator (or host) has to go to admin->user accounts and authorize that user.
Changing site registration
In some cases sites may prefer the “old” default. If your site wants this, you can change it prior to installation by amending the relevant template file in portals/_default, e.g. if you plan to install with the blank template, edit Blank Website.template and change the useregistration node; for private registration it is set to 1. You can change this value to one of the supported values:
0 – no registration
1 – private registration
2 – public registration (the “old” default)
3 – verified registration
Note: only the English (en-US) templates ship with the product; other templates are downloaded on demand via the update service during installation, so you do not have the option to alter this value during installation. Instead you will have to change it via the UI after installation.
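As a hedged sketch, the node in Blank Website.template might look something like the fragment below (the element name is taken from the text above, but the surrounding template structure is omitted and the exact markup may differ between versions):

```xml
<!-- Illustrative excerpt from a site template.
     0 = none, 1 = private (the new 7.0 default), 2 = public, 3 = verified -->
<useregistration>1</useregistration>
```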
If you’ve already installed the site and want to change the setting, log in as admin or host, go to admin->site settings, click on the user account settings tab and change it via the user registration radio button.
|
|
Posted
about 13 years
ago
by
cathal connolly
Background
With the release of DotNetNuke 7.0.0 we’re moving the service framework to WebAPI, rather than the ASP.NET MVC version that was released in 6.2.0. Scott’s already covered converting any MVC services you may have created here, as well
as providing an excellent blog on Getting Started with Services Framework WebAPI edition here, but we wanted to provide some further detail and some tips for developers interested in creating services for 7.0.0 and above.
There are many good examples already of full service implementations, including one from Scott’s blog and of course all the core services themselves, which have been updated to use WebAPI and can be viewed in the latest source code. This blog will therefore only contain the server code implementation and not the caller (whilst you can call a service via a server-side HttpWebRequest or MS Ajax, I recommend following the examples in the core where we use jQuery ajax and KnockoutJS).
Service Implementation
To create a service framework in DotNetNuke, you need a minimum of two classes – one that controls the routing (i.e. processes the request and redirects to the controller) and one that exposes the service framework methods. These classes can be contained in either a web-site project (WSP) or web application project (WAP) – the recommendation is to use the WAP model.
If using the WAP model, open Visual Studio and create a new class library project with references to all the appropriate libraries.
In Visual Studio create a new class library project for .Net Framework 4.0
Add references to the following libraries in your installation (browse to the /bin folder of your install using the Add Reference dialog box)
DotNetNuke.dll
DotNetNuke.Web.dll
System.Net.Http.dll
System.Net.Http.Formatting.dll
System.Web.Http.dll
Add references to the following standard .Net libraries (Use the .Net tab of the Add Reference dialog box)
System.Web
Set the output path of your project to the /bin folder of your 7.0 installation
Delete the default Class1.cs (or class1.vb if using vb.net) file
Add a new file to store the route details e.g. create a file called RouteMapper.cs and add the following code (this blog has further detail on the parameters used in the route definition)
using DotNetNuke.Web.Api;
namespace MyServices
{
public class RouteMapper : IServiceRouteMapper
{
public void RegisterRoutes(IMapRoute mapRouteManager)
{
mapRouteManager.MapHttpRoute("MyServices", "default", "{controller}/{action}", new[] { "MyServices" });
}
}
}
Now we need to create a class that contains our service framework implementation. Create another class, in my case one called RoleSubscriptionController.cs and add the following code:
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;
using DotNetNuke.Entities.Portals;
using DotNetNuke.Entities.Users;
using DotNetNuke.Security.Roles;
using DotNetNuke.Web.Api;
namespace MyServices
{
public class RoleSubscriptionController : DnnApiController
{
[DnnAuthorize]
[HttpGet]
public HttpResponseMessage GetPublicRoles()
{
PortalSettings ps = PortalController.GetCurrentPortalSettings();
var rc = new RoleController();
ArrayList lstRoles = rc.GetPortalRoles(ps.PortalId);
IList results = (from RoleInfo objRole in lstRoles
where objRole.IsPublic
select new SubscribedRoles
{
RoleId = objRole.RoleID,
RoleName = objRole.RoleName,
Subscribed =
UserController.GetCurrentUserInfo().IsInRole(
objRole.RoleName)
}).ToList();
return Request.CreateResponse(HttpStatusCode.OK, results.OrderBy(sr => sr.RoleName));
}
[HttpPost]
[ValidateAntiForgeryToken]
[DnnAuthorize]
public HttpResponseMessage SetRole(RoleDTO dto)
{
PortalSettings ps = PortalController.GetCurrentPortalSettings();
var rc = new RoleController();
rc.UpdateUserRole(ps.PortalId, UserController.GetCurrentUserInfo().UserID,
dto.RoleID, dto.Subscribed);
return Request.CreateResponse(HttpStatusCode.OK);
}
#region Nested type: RoleDTO
public class RoleDTO
{
public int RoleID { get; set; }
public bool Subscribed { get; set; }
}
#endregion
}
}
In this example I also use another class (SubscribedRoles) to constrain the data I pass back, i.e. there is no need to pass full RoleInfo instances.
using System;
namespace MyServices
{
public class SubscribedRoles
{
public int RoleId;
public string RoleName;
public Boolean Subscribed;
}
}
Note: the main thing to take away from this code is the inheritance – our routing class implements IServiceRouteMapper, and the service itself inherits from DnnApiController. This inheritance allows DotNetNuke to find and register the supported routes and to route requests to the correct service framework methods. Now we’ve got some code, let’s examine it in more detail.
Checking route registration
Before you get too far along it’s wise to check that your routing is working as expected. Looking at the first 3 parameters in the mapped route we can see that the request should be in the form http://dnndev/DesktopModules/MyServices/API/RoleSubScription/GetPublicRoles , so if we enter that into the browser we should see a successful request with some data returned (in our case just the public roles that the current user is a member of).
If we examine the request in an HTTP proxy such as Fiddler, we’ll also see it returned with an HTTP status of 200, indicating a successful request. If the service doesn’t return data but instead shows up as a 404, we have an issue. If the method looks fine (e.g. it’s an HTTP GET so will work with URL-based requests) and the namespaces appear correct, it’s a good idea to check what route is being registered in DotNetNuke. To do so, you can increase the logging level in log4net to “All” to capture additional diagnostics. Once that’s increased, go to Portals\_default\Logs, open today's log file and search for “Mapping route” – you should find the route details being logged like below:
2012-11-06 17:27:33,623 [DNN-PC25][Thread:22][TRACE] DotNetNuke.Web.Api.Internal.ServicesRoutingManager - Mapping route: MyServices-default-0 @ DesktopModules/MyServices/API/{controller}/{action}
If you don’t see the route registered then you’ve probably forgotten to set the inheritance correctly, the project has errors, or it is not set to deploy the dll to the bin folder, so check again. Otherwise you may have made a typo in your URL (either the “DesktopModules/MyServices” portion or the name of the controller or action), so check those again – remember that the controller is simply the name of your controller class from above with the “Controller” part dropped, so in our example it’s “RoleSubScription”.
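Once the route resolves, the action can also be called from client script. As a minimal sketch (the service root below is an assumption based on the route registered in RouteMapper; a production module would obtain the root and any required headers via the DotNetNuke services framework client helpers), a jQuery-style request could be built like this:

```javascript
// Build the settings object for a jQuery $.ajax call to the
// GetPublicRoles action. The service root is an assumption based on
// the "MyServices" route registered earlier.
function buildGetPublicRolesRequest(serviceRoot) {
  return {
    type: "GET", // matches the [HttpGet] attribute on the action
    url: serviceRoot + "RoleSubscription/GetPublicRoles",
    dataType: "json"
  };
}

// With jQuery loaded on the page you would then issue:
// $.ajax(buildGetPublicRolesRequest("/DesktopModules/MyServices/API/"))
//   .done(function (roles) { /* bind roles, e.g. with KnockoutJS */ });
```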
Note: In the fall update Microsoft are releasing some support for WebAPI tracing that you’ll be able to add to your projects and I’m sure we’ll investigate adding it to the core to make the debugging experience better.
Using the correct verbs
In our example, our two public methods use different verbs: GetPublicRoles uses the HttpGet attribute and SetRole uses HttpPost. This is deliberate, as different types of action should use different verbs. Whilst it may be tempting to use simple GETs for all your services, as they’re easier to test using querystring parameters, this is an unsafe approach. The best practice guideline is to only use GET for a request that returns data, and to use the POST verb for a request that changes data.
The reason this is important is that a GET request can be invoked very simply via the URL or via any HTML element that supports external requests, e.g. an img tag will issue a GET request for its src URL. In the same way that one site can “leech” images from another by setting the src attribute to a URL on a different website, service requests that support GET can also be invoked from a different site.
The danger with this is if the service request implementation changes data, then one site can affect data on another site. Even for service requests that are not open to anonymous users, these can still be executed as your browser will automatically send the relevant site cookies when making the request.
In security terms this is called a cross-site request forgery. One famous example of this came with an early version of the Netflix API, where GET requests could perform various actions which changed data. http://seclists.org/fulldisclosure/2006/Oct/316 covers this in more detail, but one of the examples of this exploit was creating an image tag that would add pornographic videos to your Netflix queue if you viewed the page containing it – obviously not an ideal implementation and a nasty surprise for the users.
Note: whilst GET and POST are the two most commonly used verbs there are others, particularly if you are looking to use a RESTful approach. Typically a REST based service implementation will use the following verbs:
GET - Used when a browser is requesting data from the server
PUT - Typically used when updating an item
POST - Typically used when creating a new item
DELETE - Used when deleting an item
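To make the mapping concrete, here is a sketch of how the four verbs might line up against a hypothetical "Items" controller (the controller, action names and URLs are purely illustrative, not part of the core API):

```javascript
// Illustrative CRUD-to-verb mapping for a hypothetical Items service.
// Each function returns the settings object you would pass to $.ajax.
var itemsApi = {
  list:   function ()     { return { type: "GET",    url: "/API/Items/List" }; },
  create: function (item) { return { type: "POST",   url: "/API/Items/Create", data: item }; },
  update: function (item) { return { type: "PUT",    url: "/API/Items/Update", data: item }; },
  remove: function (id)   { return { type: "DELETE", url: "/API/Items/Delete/" + id }; }
};
```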
Data transfer objects
In our example above, you’ll notice that the second class (the service implementation) has another class embedded within it (RoleDTO). This is a change in WebAPI from the previous ASP.NET MVC approach. ASP.NET MVC did not apply strict behaviors to parameters passed to a service, which allowed POST requests to contain data in the querystring. Obviously, parameters that change data should not be contained in the URL, as this makes it much easier to steal or replay the actions.
One of WebAPI’s changes is to enforce parameter binding – by taking this approach the HTTP request is converted into a .NET type (in our example RoleDTO) which can then be used without messy casting and provides for a better contract. This is often referred to as a “model” class, particularly in MVC, and another common name for it is a data transfer object (or DTO). By convention in the core we suffix these classes with DTO, which indicates their purpose. You can read more on the details of parameter binding here.
Don’t pass unnecessary parameters
In our example code we are changing the state of the user's roles for the portal. As such it would seem sensible to pass values such as the userID and portalID, as we will need those to apply the actions, as well as the roleID and the current state of the role. However, this approach is wrong, as it is not difficult for a user to create a request for themselves, capture it in an HTTP proxy such as Fiddler, and then amend the values (e.g. change the userID value passed to the service request to a different one such as “1” and update the superuser account).
Whilst it’s entirely possible to validate the values within the method implementation (and necessary in cases where you’re creating a function where one user can legitimately invoke actions for another user), in general you should not pass those parameters at all. Instead you should retrieve the values from the DotNetNuke API – in our case we get the current userID via UserController.GetCurrentUserInfo().UserID and the current portalID via PortalController.GetCurrentPortalSettings().PortalID.
Method authorization
In both ASP.NET MVC and WebAPI, service methods are open by default. This means that any user (including anonymous users) can call them. In DotNetNuke we’ve taken a more pessimistic approach and assumed that only Host (superuser) users can call a method unless otherwise directed (i.e. all methods have the equivalent of the RequireHost attribute). As our services are designed to be used by users or a portal, we need to indicate this – in our example we have done so by using the DnnAuthorize attribute.
Note: you can of course apply this at the class level, but in cases where your service has many methods you may want to apply it at the method level in each case as it’s easier to assess the authorization at a glance without having to scroll up to the top of the class file.
What’s ValidateAntiForgeryToken?
When working with service requests, it’s important to realize that some of the automatic security you get from asp.net Webforms is not in place. In Webforms if I have a page with a server control (such as a LinkButton) to invoke an action, when the page loads on a postback a number of pieces of validation occur – asp.net verifies if that button is able to invoke that event as well as verifying if any data was tampered with, before executing the action. This ensures that the request came from the page itself and not from another site attempting to fake the action, and that the data is valid (e.g. if selecting a value from a dropdownlist, that the value was there when the list was rendered and not “hacked-in” afterwards)
When invoking service requests via client script (such as jQuery Ajax), these automatic verifications don’t happen, so additional care must be taken. To give an example, imagine a potential hacker creates a page on their site (hackersite.com) and manages to convince you to visit the site and click on a button (you may not even know this occurs, in the case of a clickjacking attack). Their page may contain a form that posts data to your site via its action attribute.
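For illustration (the URL and field names here are hypothetical, not from the core), the markup such a hostile page might generate can be sketched as follows; this is exactly the kind of request that the ValidateAntiForgeryToken attribute defends against, because the forged request cannot supply the per-page token:

```javascript
// Build the markup a hostile page might render: a form targeting your
// site's service URL, which page script could then auto-submit. The
// victim's browser attaches their cookies, so without an anti-forgery
// check the request would look authenticated. All names are illustrative.
function buildForgedFormMarkup(targetUrl, fields) {
  var inputs = Object.keys(fields).map(function (name) {
    return '<input type="hidden" name="' + name + '" value="' + fields[name] + '" />';
  }).join("");
  return '<form method="POST" action="' + targetUrl + '">' + inputs + "</form>";
}
```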
|
|
Posted
about 13 years
ago
by
Scott Willhite
I cannot think of a more fundamental expression of community than voting.
In democracies around the world we have the hard fought right to help choose our leadership; to influence the policies of our nations; to empower those who govern. In a
world full of Facebook and YouTube and Angry Birds, it is often easy to forget that these “rights” we have become accustomed to are not universally enjoyed by all… that they have come with a cost… and that our failure to exercise them exhibits not only a passive indifference, but an active disregard for the staggering price of this simple freedom.
In the United States today Americans will elect a President and Vice President. Many Senators and Congressmen will be selected to represent the interests of states. Sheriffs, judges and school board members will be chosen; port authority representatives, public defenders and a myriad of county officials. Every state, county and municipality will make decisions on tax levies, zoning, state constitutional amendments and a host of other issues of local importance. Approximately 230 million Americans will have the right to participate in these choices… about 130 million will.
We all have dreams. While the American Declaration of Independence asserts our rights to “life, liberty and the pursuit of happiness”, it does not guarantee them. It cannot. And we may “hold certain truths to be self evident”, but what makes them so is our active community decision and support of these principles.
I may not agree with your choices and you may not agree with mine. But the one thing we should all agree on is the right and privilege of free people to voice their thoughts and to express their choice of leadership. The men who penned the Declaration of Independence and the Constitution did not all agree on the issues… but they understood the importance of standing together, as community, in this. The declaration ends with these words: “And for the support of this Declaration, with a firm reliance on the protection of Divine Providence, we mutually pledge to each other our Lives, our Fortunes, and our sacred Honor.”
At DotNetNuke we have a deep appreciation for this kind of commitment and community; the kind that recognizes that when we band together and serve one another… that everyone is served. In our community we call this the abundance mentality.
If you have not already done so, we at DotNetNuke offer you a sincere encouragement to join your neighbors; to honor generations of heroes that have sacrificed for you; and to contribute to your community in a meaningful way… to vote.
|
|
Posted
about 13 years
ago
by
Mitchel Sellers
One of the biggest concerns for developers these days is how to increase the performance of delivered applications. User expectations have changed over the past few years and the expectations are very, very high in regards to page load times and how
quickly information should be returned to the users. Thankfully, DotNetNuke includes a number of different features that can help applications work quickly. Some of these features are known by everyone: things such as the DotNetNuke performance settings under "Host Settings", cache time settings on a module-by-module basis, and, if on Professional Edition, the built-in page Output Cache Provider. However, one often overlooked API that is helpful for developers is the DataCache API. In this post I'll do a deep dive into why this API is so helpful and some scenarios where leveraging it can reduce page load times and the system resources needed for each page request.
What is the DataCache
Before I get into the specifics of how to use the DataCache API, I want to give a bit of detail on what exactly DataCache is. If you are an experienced ASP.NET developer you should be familiar with the ASP.NET Application object, a cache repository for storing information at the application level. Items are stored in Application using code like the following:
Application["MyKey"] = MyObject;
This shouldn't be anything new to you, but within DotNetNuke, just like with session state, it is a recommended practice not to use this object, because Application is not aware of the DotNetNuke caching APIs or web farm configurations. Enter the DataCache API.
DotNetNuke's DataCache object comes in to fill the gap. Used at the base level in a very similar fashion, it allows you to store any object that you desire using the DotNetNuke cache provider. If the site is configured to use file caching your content is stored in a file; if memory caching, it will be stored in memory. With this you can rest assured that your cached objects will work just like the rest of the DotNetNuke infrastructure.
What do I Cache?
So great, we know we can cache objects, but what do we cache? This is the age-old question, and something I cannot give a 100% answer to here that will be a solution for everyone reading this. The key is that we can store ANY objects necessary. A few example cache strategies can be illustrated by discussing what I do with two of our most popular open source modules. You can cache individual objects, and implement this caching seamlessly in your controller class. We are doing this for the custom settings of the "Expandable Text/HTML" module. The advantage here is that if you are calling ExpandableTextController.GetSettings(12), with cache-checking code inside the "GetSettings" method, we can be sure that we only hit the database if the object isn't in cache already. This helps reduce database hits for commonly used objects.
In addition to this you can also cache generated content. Let's assume that we have a module that grabs a number of objects from the database, does a bunch of processing and eventually loads the content into an ASP.NET Literal control on the UI. Sure, we could cache the database objects to prevent the database call, but we could also think about storing the fully generated content. Doing this, even with a 15 minute cache time, can reduce CPU usage, memory usage, and page load times for your custom modules.
Using the DataCache Object
When it comes to using the DataCache object, there are a number of situations where a similar pattern can be used to access the cache and store items. To help remove some of the duplicate code that can result, I often include a "CacheHelper" class. Looking at the code snippet below you can see what we use on all projects.
public static bool CacheExists(string key)
{
    return DataCache.GetCache(key) != null;
}

public static void SetCache<T>(T toSet, string key)
{
    DataCache.SetCache(key, toSet);
}

public static T GetItemFromCache<T>(string key)
{
    return (T)DataCache.GetCache(key);
}
This makes the process much easier: as a developer you don't need to perform casting along the way, and you can use very simple patterns to get objects from cache. For example I can do something like this:
var myObject = CacheHelper.CacheExists("MyItem") ? CacheHelper.GetItemFromCache<MyType>("MyItem") : new MyType();
As you can see, it's a one-line process to get my object, or a new object if the item isn't contained within the cache. You can easily expand on this to help with your specific application.
Conclusion
Application performance is something that as web developers we need to be working on every day. Using DotNetNuke's built in cache system gives us a helpful tool to improve performance, and also ensures that applications we develop can easily scale to Web Farm environments. Please feel free to share comments/questions below.
This post has been cross-posted to my personal blog.
|
|
Posted
about 13 years
ago
by
Benjamin Hermann
Seriously, we like to create fancy stuff with DotNetNuke. How do we normally do that? We lock ourselves in a room together for a couple of hours and simply put our heads together. Classy hackathon, huh? (Usually we see results like the User Directory Module: get it – or other stuff.)
This time: nope! We are taking DotNetNuke to the NYC Marathon, in the face of Sandy. So: watch out for the fit runners from ITM America! Keep your fingers crossed.
|
|
Posted
about 13 years
ago
by
Cuong Dang
I've used quite a few image replacement techniques in web design in the past to create better typography for the web. But recently I ran into a technique, improved by Scott Kellum, that proved to be effective while also enhancing the performance of the site.
If you're a web designer, you've probably heard of the Fahrner Image Replacement technique. It essentially uses the CSS text-indent property, set to a very large negative number such as -9999px, so the text isn't visible to users.
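For comparison, the older Fahrner-style rule described above typically looks something like this (the class name is illustrative):

```css
/* Classic image replacement: shove the text 9999px off-screen.
   The browser still computes a box that wide, which is the
   performance drawback discussed below. */
.hide-text {
  text-indent: -9999px;
}
```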
The technique is known for having a performance drawback, since the browser has to draw the screen out to the measurement defined in the CSS. Jeff Zeldman recently published an improved technique in his post about this fix, based on Scott Kellum's refactored code, as follows:
.hide-text {
text-indent: 100%;
white-space: nowrap;
overflow: hidden;
}
So use this in your next web design project to eliminate that performance drawback in the CSS.
Wanna learn more about different CSS image replacement techniques?
Chris Coyier at CSS-Tricks published a pretty good list here.
|