Posted over 14 years ago
For the past couple of months the number of contributions and modifications to Nancy has grown in a nice and organic way. During that time it has become a real task to take care of all outstanding work, maintain the community presence, work on documentation and all the other things that come with an open-source project. For that reason I have decided that the Nancy team needed to expand, and there was one candidate that stood out in the crowd. He has contributed more awesome features to Nancy than I can recall, he is the one person I have an ongoing dialogue with about the progression and vision of the project, and I see no better person to gain commit access to the main repository. I am, of course, talking about the lean, mean, British "I-code-better-on-beer-and-wine" machine, Steven Robbins a.k.a. @grumpydev, author of TinyIoC among other things. He also coined the term "super-duper-happy-path" that we use as a guiding star for each new feature we design for Nancy. Over the next couple of weeks we are going to start ironing out all the details of turning the Nancy repository into an organization and moving things out of the core repository, such as the 3rd party dependencies (they will still be around, but not in the main solution). So, welcome aboard Grumpy, and thank you for all your hard and amazing work on Nancy!
Posted over 14 years ago
Right from the very first day that I announced Nancy to the world, the pace at which things have moved forward has been nothing short of amazing. Nancy was always meant to be developed right out there with the community, and let me tell you that I have not been disappointed; the community answered the call. Pull request nr. 116 was sent in the other day and over 26 people (with a couple more having pending requests) have contributed code to Nancy – some of them on multiple occasions. Since then Nancy has gained an impressive feature list, such as bootstrapping capabilities for all the major Inversion of Control containers, view engine support (Spark, Razor, NDjango and a Nancy-specific engine), hosts for ASP.NET, WCF, self-hosting and even one of the (if not THE) first OWIN-compatible hosts, cookies, sessions, embedded views, a pre- and post-request pipeline, security (authentication and authorization) and many, many more. The list keeps on growing. There have also been several individuals and companies that have started writing applications on Nancy, ranging from simple proof-of-concept applications to actually taking Nancy into a production environment in software that's sold to customers. We've also gotten to see Nancy run on Mac and Linux thanks to Mono support, and we think that's just awesome! So what's the fuss all about?! The goal of Nancy is to provide a no-fuss, low-ceremony framework for building web applications. One of the key concepts applied when working on Nancy is that everything should have a "super-duper-happy-path" implementation; you shouldn't have to jump through hoops to write your websites, there should be a sensible default for everything. Simplicity is key, but not at the expense of power. At first glance you wouldn't know it, but pretty much everything in Nancy is customizable.
It's intentionally been designed to stay out of your way, but should you find yourself in need of changing a specific behavior, Nancy will make it as frictionless as can be. Right from the get-go, Nancy was built not to rely on a specific environment to be able to run. We built the concept of host adapters, and they sit right in between Nancy and whatever environment she might run on. Out of the box we currently supply hosts for running on top of ASP.NET, WCF, OWIN and a self-host (built on HttpListener), but the list is sure to expand and writing adapters is easy. There is no dependency whatsoever on System.Web from the Nancy core assembly, so you can, without any problem, embed Nancy in your applications and so on. A REST-based endpoint in your application? Sure, why not?! Bla, bla, bla – show me the codez! Alrighty then. To give you an idea of what it is like to work with Nancy, we will be building a simple Hello World (surprise!) Nancy web application. The application will be built to run on ASP.NET, but it could just as easily have been running on any of the other hosts (just swap it out, no code changes needed). To get started, create a new ASP.NET Empty Web Application project (don't worry, we have our own project templates, but let's skip those and get right to the fun stuff). Once you've created the application it's time to grab Nancy. You could visit our repository to download the source and build the binaries, or you could choose the easy way: use NuGet to grab the bits we need. We are going to go with the NuGet packages, so grab the Nancy.Hosting.Aspnet nuget. Not only will this install the adapter required to run Nancy on ASP.NET, but it will also register the adapter in your web.config and grab the Nancy core nuget. That's it. You have the foundation of a Nancy application running on top of ASP.NET. Now you need content. So let's add a module to our project (modules can be added anywhere in your project, Nancy will find them for you).
```csharp
public class MainModule : NancyModule
{
    public MainModule()
    {
        Get["/"] = parameters => {
            return "Hello World";
        };
    }
}
```

What you are looking at is a module that responds to a GET request to the root path of your application. When an incoming request matches those criteria, Nancy will respond with the text "Hello World". Run the application and verify that I'm not kidding you – it really is as simple as that! That's all I will show you in this post. In following posts I will be sure to take you on a journey into the world of Nancy and show you things like POST, PUT, DELETE and HEAD requests, injecting dependencies into modules, using response formatters, grabbing parameters from the requested route, complex route syntax, view engines, model binding, before/after request handling (both at the request and module level) and much, much more. Nancy on the web: if you want to talk about Nancy you can find us on Google Groups or on Twitter using the #NancyFx hashtag. You can also reach me on Twitter @thecodejunkie.
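As an aside, the Get["/"] = … syntax is nothing magical: it is essentially an indexer that stores a handler for a path. Here is a toy, self-contained sketch of that idea – an illustration of the technique, not Nancy's actual implementation (the RouteBuilder type and its members are invented for this example):

```csharp
using System;
using System.Collections.Generic;

// Toy stand-in for Nancy's route registration: an indexer over a
// dictionary that maps paths to handler delegates.
public class RouteBuilder
{
    private readonly Dictionary<string, Func<string>> handlers =
        new Dictionary<string, Func<string>>();

    // get["/"] = () => ... works because of this indexer.
    public Func<string> this[string path]
    {
        get { return handlers[path]; }
        set { handlers[path] = value; }
    }

    // Look up and invoke the handler for a path, or fall back to "404".
    public string Invoke(string path)
    {
        Func<string> handler;
        return handlers.TryGetValue(path, out handler) ? handler() : "404";
    }
}

public static class Demo
{
    public static void Main()
    {
        var get = new RouteBuilder();
        get["/"] = () => "Hello World";            // register a handler
        Console.WriteLine(get.Invoke("/"));        // Hello World
        Console.WriteLine(get.Invoke("/missing")); // 404
    }
}
```

The real framework of course adds route patterns, parameter capture and module discovery on top, but the registration syntax boils down to an indexer like this.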
Posted almost 15 years ago
Two months ago, to the day, I first announced Nancy here on Elegant Code as my new open-source project. I could never have anticipated the chain of events that would take place during the following two months. My tweet about the blog post got re-tweeted more times than I can remember and the post filled up with, mostly, awesome feedback. It took about a week and then Nancy got the first pull request on her GitHub account, and from there it started to build up momentum quite fast. At the time of this writing there have been 53 pull requests, by about 20 different people, for all kinds of features, bug fixes, custom hosts… you name it. Not bad, eh? Not only that, but Nancy has managed to pull together a nice little community, over at Google Groups, where the future of Nancy is being discussed every day. She's also getting some attention on Twitter, where we're trying to gather it all under the #NancyFx hashtag, and it seems like every day there is a new face popping up. In the initial commit there were a couple of abstractions and light extension points in place for things like custom hosts, view engines and response formatters. These turned out to be a very smart move, because the community really embraced them and ran with them. Let me recap some of the things that have happened since day zero:

- View engines; there is currently support for Spark, Razor, NDjango and static templates. We did have support for NHaml for a while, but that community seems to have gone into hibernation, so the decision was made to pull it and not waste dev cycles on it until there is a demand.
- Inversion of Control integration; this has been one of the big ones. Right from the beginning I wanted Nancy to be a self-composing framework, that is, to have a small internal container that glues the framework together at runtime. I also wanted the possibility to register module-level dependencies. It took a while and a couple of iterations, but we finally settled on a design and made TinyIoC the internal container. It was very important that it was a transparent experience to the end user; unless you have a need for it, you never know it's there. Not only that, but thanks to community contributions we've managed to create hooks (known as bootstrappers in Nancy) for all of the major players such as StructureMap, Autofac, Unity, Ninject and Windsor. Using one of these containers with Nancy is very easy and only adds one more file to your project.
- Response formatters; one of the powerful features in Nancy is the response model and how it lets you return different kinds of things, leveraging implicit cast operators to hide the complexity behind it. Thanks to the awesome Nancy community we now have the capability to return JSON, XML and images, and to perform redirect responses. It's super easy to write a response extension and hook it into Nancy, so if you have any ideas….
- Bug fixes; yeah I know, it's shocking, but it's still true. Every now and then someone finds a bug but, more importantly, most of the time the same person contributes a patch to resolve it!
- Hosts; Nancy has been designed with the idea of being able to run in multiple environments and shipped with a host to run on top of ASP.NET. Right now we have additional hosts for running Nancy on a WCF host and on a stand-alone host. There are more of them on the way.
- HEAD requests; the first release of Nancy supported GET, POST, PUT and DELETE requests, but thanks to a clever little contribution she now also serves up HEAD requests.
- Cookies; not the ones you can eat….
- Cookie-based sessions; I think this is also self-describing.. oh… yeah, they are encrypted, in case you were wondering!
- Mono; we've started to seriously look at getting Nancy to run on top of the upcoming Mono 2.10 release (we need the improvements they've made to the dynamic keyword) and have already managed to run the sample application on both Linux and Mac OS X. Moving forward, Mono is going to be a supported and equally prioritized platform.
- Visual Studio templates; these are still a work in progress, but right now I have the ability to new up a Nancy project based on the html5boilerplate… and we have a bare-bones template in the making.
- Build script; a couple of days ago we added what every self-respecting open-source project needs: a build script. We chose to use Rake (that's a Ruby-powered format, for those of you who have never seen it before) and make use of the excellent Albacore gem by the awesome Derick Bailey.

I've probably forgotten a bunch of things; there's been so much going on that I can't remember it all without looking at the commit history! That said, there are still things we want to put in place, and there are already extensive discussions on the user group about them:

- Security
- Content negotiation
- OWIN hosting (we're keeping track of the OWIN 1.0 specification)
- Self-hosting (Benjamin van der Veen, if you are reading this – maybe Kayak can be a candidate!)

The one thing that we can't seem to pull off is finding a designer to design a proper logo for the framework! We are in desperate need of an awesome logo that we can put everywhere, and we want to start creating a website. So if you know any good designers who wouldn't mind putting in some time designing an awesome logo for an open-source project (read: pro bono), please tell them about us and about Nancy! A while back I created a post about the Nancy logo on our user group. It contains some information on the philosophy and goals for a logo for Nancy. I can't stress enough how much I would appreciate it if you took this and shared it with your designer friends!

As always, if you want to ping me, either drop me a comment right here on the blog or find me on Twitter @TheCodeJunkie… I can't help but wonder what Nancy will be like in another two months!
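The implicit cast operators mentioned under response formatters are a neat C# trick worth a quick sketch. The Response type below is a simplified stand-in for illustration, not Nancy's real class; it only shows how returning a string or an int from an action can "just work":

```csharp
using System;

// Hypothetical sketch of the implicit-cast idea: user-defined implicit
// conversion operators let the compiler turn a string or an int into a
// Response without the caller ever mentioning the type.
public class Response
{
    public int StatusCode { get; set; }
    public string Contents { get; set; }

    // A returned string becomes a 200 response with that body.
    public static implicit operator Response(string body)
    {
        return new Response { StatusCode = 200, Contents = body };
    }

    // A returned int becomes a response with that status code.
    public static implicit operator Response(int statusCode)
    {
        return new Response { StatusCode = statusCode, Contents = string.Empty };
    }
}

public static class Demo
{
    public static void Main()
    {
        Response ok = "Hello World";   // implicit string -> Response
        Response notFound = 404;       // implicit int -> Response
        Console.WriteLine(ok.StatusCode);       // 200
        Console.WriteLine(notFound.StatusCode); // 404
    }
}
```

This is what hides the complexity from action handlers: they appear to return plain values while the framework always receives a Response.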
Posted almost 15 years ago
Nearly one and a half years ago I joined MefContrib as a contributor. Soon after, I was given ownership of the project by the guy that set it up, Bill Kratochvil. Back then MEF was still in its early preview bits, and most of the stuff that existed in the MefContrib community were small sample applications on how to use MEF with ASP.NET and so on. A lot has happened since. I am very happy to be able to announce the release of MefContrib 1.0.0.0; we tagged it a couple of days ago in our repository and you can download it from either our GitHub repository or the download section of our website, mefcontrib.com (where you can also get builds of the latest check-ins). A good resource for MefContrib-related information is our Twitter account @MefContrib, where we post news and you can also get information when new stuff is committed to the repository. The GitHub page also contains a sample repository that currently hosts a couple of applications that show how to use the Convention Model, a way of using MEF with conventions instead of attributes, and the InterceptionCatalog, a catalog that uses strategies and interception hooks – this enables things like Aspect-Oriented Programming with MEF, and it is the catalog that powers all other catalogs in MefContrib. So what can MefContrib 1.0.0.0 add to your MEF toolbox?

- InterceptionCatalog – This catalog lets you decorate another catalog and provides ways of adding new parts and exports at runtime, as well as providing it with a set of interception hooks that you can use to modify parts. We ship with hooks for using DynamicProxy and LinFu together with MEF, which opens up a completely new world of things you can do with MEF. Piotr Włodek has a very nice introductory post on the InterceptionCatalog that shows how you can use DynamicProxy to weave in new behavior at runtime.
- ConventionCatalog – Removes the need to use attributes when defining parts, imports and exports in MEF. You use a convention-based approach to let MEF know how to compose the application. This offers some great powers with very little configuration. One interesting thing it also enables you to do is to import and export things you do not have the source code for (to add the attributes) or that were not previously designed to work with MEF.
- FilteringCatalog – Enables you to use predicates to filter out discovered parts at runtime. For example, filter out parts that carry a specific piece of metadata or have a certain creation policy. Since it is based on a predicate, you can write whatever filter you want.
- GenericCatalog – Provides open-generics support in MEF!
- Unity integration – Provides a bridge between Unity and MEF, enabling both to work together to satisfy dependencies.
- FactoryExportProvider – A custom export provider that lets you take charge of instance creation in MEF. You can read more about the FactoryExportProvider at Piotr Włodek's blog.

There are a lot more small extensions and helpers included in the solution, so get the bits and try them out for yourself. Oh, and all of the above also works great in Silverlight, using our Silverlight version of MefContrib! If you have anything you would like to contribute to MefContrib, please let us know! We are also looking for people to participate in formats other than contributing features. Like most open-source projects, there are a lot of things that need more attention than we are able to give them at the pace they deserve (not neglecting, just slow to get there):

- Documentation (both code and features)
- Bug reports
- Bug fixes
- Feature requests
- Feature implementations
- Test coverage
- Code quality (FxCop, NDepend and so on)
- Sample applications

So there you have it, a ton of MEF goodies for you to play with. We hope you like it, and please let us know if you use the stuff in any products; it would be awesome to know!
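To give a feel for the predicate idea behind the FilteringCatalog, here is a tiny self-contained sketch. The PartDefinition and FilteringCatalog types below are simplified stand-ins invented for this illustration, not the actual MefContrib API:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Simplified model of a discovered part with a metadata bag.
public class PartDefinition
{
    public string Name { get; set; }
    public Dictionary<string, object> Metadata { get; set; }
}

// The core idea: wrap a set of parts and expose only those matching
// a caller-supplied predicate.
public class FilteringCatalog
{
    private readonly IEnumerable<PartDefinition> inner;
    private readonly Func<PartDefinition, bool> filter;

    public FilteringCatalog(IEnumerable<PartDefinition> inner,
                            Func<PartDefinition, bool> filter)
    {
        this.inner = inner;
        this.filter = filter;
    }

    public IEnumerable<PartDefinition> Parts
    {
        get { return inner.Where(filter); }
    }
}

public static class Demo
{
    public static void Main()
    {
        var parts = new List<PartDefinition>
        {
            new PartDefinition { Name = "Logger", Metadata = new Dictionary<string, object> { { "Exported", true } } },
            new PartDefinition { Name = "Legacy", Metadata = new Dictionary<string, object>() }
        };

        // Keep only parts carrying the "Exported" metadata key.
        var catalog = new FilteringCatalog(parts, p => p.Metadata.ContainsKey("Exported"));
        foreach (var part in catalog.Parts)
            Console.WriteLine(part.Name); // Logger
    }
}
```

The real catalog does the same thing against MEF's ComposablePartDefinition instead of this toy type, which is why any predicate you can express in code works as a filter.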
Posted almost 15 years ago
Shortly after announcing the first drop of Nancy, my friend Mark Nijhof asked me "one thing that keeps sounding in the back of my head is; why not use Sinatra instead?" The short answer to this is: if it makes more sense for you and your project to use Sinatra, then you should! However, it is my experience that it's not always that black and white when it comes to the question "why don't you use…?". The answer really depends on what type of developer you are and the surrounding circumstances.

Now let me clarify that Mark said he liked Nancy and that this was just a feeling he got while reading the post. This post has less to do with Mark and more to do with the discussion about "why don't you use…?", since it is a recurring topic that pops back up every now and then.

Say you are doing contract work, either on your own or at an agency. Here it depends on two things: first, do you specialize in delivering software on a specific platform or do you target a wider audience? Second, does the client have any demands on the platform and technology stack that you use? The ability to choose technology stacks in either of these situations should be self-evident. If you are in a position where both the client and employer let you choose the best stack for the project, then you are in a sweet position!

Imagine you are working in-house at a product company; should you just be able to pick any technology stack that you feel is the best for the task? Would you let your employees do that if you were the CEO? There are additional costs for training team members, and you risk getting way too many go-to guys, the people that have a passion for a certain technology. And what if some of these guys were to leave? Then what? It's not that easy to recruit people with cross-stack experience, especially those with senior skills in all of them.

I know that at the place where I work it would not be a good idea to start building out products using Django, Ruby, Scala – whatever – simply because our organization could not handle it. At least not today. And who knows what it will be like in another year? A couple of years ago we wouldn't have been able to use Kanban efficiently, but we matured over time and today we use it with great success.

I am a firm believer in "use the right tool for the job", but I am very aware that it's not only the framework or programming language that defines what "the right tool" is; there are lots of organizational and surrounding factors that all play a part in that.
Posted almost 15 years ago
For a couple of weeks I have been keeping myself busy with open-source stuff. One of the things has been to spike out a web framework idea and later on turn it into a real project. The project is inspired by, but not a clone of, the Sinatra web framework built on Ruby. The name, Nancy, is a reference to Nancy Sinatra, the daughter of Frank Sinatra. There are quite a lot of things that I want to put into the framework, but it is functional in its current state. One of the goals for Nancy is to make it run on environments and platforms other than ASP.NET / IIS, and there are spikes taking place to run it on Mono with FastCGI, making it possible to run on a bunch of other platforms. However, although this is the goal, the current source code does not provide any helpers to make that possible. Right now it only ships with an IHttpHandler that acts as an adapter between ASP.NET / IIS and the Nancy engine. The project is built using C# and makes use of xUnit, MSpec and FakeItEasy. The key component in a Nancy application is the modules. This is where you create actions, which are handlers for a given request type at a given path. Let me show you what I mean:

```csharp
public class Module : NancyModule
{
    public Module()
    {
        Get["/"] = parameters => {
            return "This is the site route";
        };

        Delete["/product/{id}"] = parameters => {
            return string.Concat("You requested that the following product should be deleted: ", parameters.id);
        };
    }
}
```

What you are looking at here is the foundation of a very small application that will respond to GET requests to the root URL of the site, and DELETE requests to /product/{id}, where {id} is a parameter placeholder. All parameters will be captured and injected into the action, like you see with parameters.id. The entire route handling mechanism is swappable, so you could write your own handler that was able to interpret whatever route syntax you prefer.
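The parameters.id syntax above relies on the dynamic keyword. Here is a toy sketch of how captured route values can be exposed as dynamic members – an illustration of the concept, not Nancy's actual implementation (the ParameterBag type is invented for this example):

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

// Toy stand-in: captured route values stored in a dictionary but read
// through dynamic member access, so handlers can write parameters.id.
public class ParameterBag : DynamicObject
{
    private readonly Dictionary<string, object> values =
        new Dictionary<string, object>();

    public void Add(string name, object value)
    {
        values[name] = value;
    }

    // Called by the runtime binder for expressions like parameters.id.
    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return values.TryGetValue(binder.Name, out result);
    }
}

public static class Demo
{
    public static void Main()
    {
        var bag = new ParameterBag();
        bag.Add("id", "42"); // as if captured from /product/42

        dynamic parameters = bag;
        Console.WriteLine("Delete product: " + parameters.id);
    }
}
```

This is also why Mono support hinges on the quality of its dynamic keyword implementation, as mentioned elsewhere on this blog.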
A module can also be declared with a module path, meaning that all action routes that you declare in the module will be relative to the module path. For example, if you were to do:

```csharp
public class Module : NancyModule
{
    public Module() : base("/foo")
    {
        Get["/"] = parameters => {
            return "This is the site route";
        };

        Delete["/product/{id}"] = parameters => {
            return string.Concat("You requested that the following product should be deleted: ", parameters.id);
        };
    }
}
```

then the application would respond to requests sent to /foo and /foo/product/{id}. You can of course have as many modules as you want; Nancy will detect them all and figure out which action should be invoked. There are also some nice tricks in there for return values. In the examples above you get the impression that you are expected to return a string, and this is not the case. In fact, each action returns an instance of a Response type. What you are seeing is the result of some implicit cast operators. There are a couple of them declared:

```csharp
public class Module : NancyModule
{
    public Module()
    {
        Get["/"] = parameters => {
            return "Returning a string";
        };

        Get["/404"] = parameters => {
            return 404;
        };

        Get["/500"] = parameters => {
            return HttpStatusCode.NotFound;
        };
    }
}
```

All of these will work and send back a valid HTTP response (including headers) to the client. You can of course explicitly return a Response instance, which opens up some nice customization. A module in Nancy also declares a pair of properties called View and Response. Both of these have an interface return type, and each of them are empty marker interfaces that you can use to wire up extension methods. The View property is meant to be used for view engine integration, and in an unpublished spike (it still needs some more work) I've wired up Spark (http://www.sparkviewengine.com/) so that Nancy is able to process spark files.
This is an example of what that looks like:

```csharp
public class SparkModule : NancyModule
{
    public SparkModule()
    {
        Get["/user/{name}"] = parameters => {
            return View.Spark("user.spark", parameters);
        };
    }
}
```

Of course, all of this is work in progress and the syntax might change. The goal is to support all of the popular view engines, and if you are up to the task of implementing support for one of those, please let me know – I would love the help! The Response property is meant to be used for extensions that help format the response in different ways. A test I have running locally is an extension method that enables me to return JSON-formatted data:

```csharp
public class JsonModule : NancyModule
{
    public JsonModule()
    {
        Get["/user/{name}"] = parameters => {
            return Response.AsJson(someObject);
        };
    }
}
```

Of course I would like Nancy to ship with a healthy set of these response helpers, so feel free to chip in! Oh, there is one last property that you can use, and that is the Request property, which gives you access to the current request information so that you can use it from inside your action handlers. Right now it is limited to the requested path and verb, but the goal is to have a rich representation of the current request – what stuff would you like to see in it?

```csharp
public class RequestModule : NancyModule
{
    public RequestModule()
    {
        Get["/"] = parameters => {
            return string.Format("You requested {0}", Request.Path);
        };
    }
}
```

One thing I would like to mention about the action handlers and their routes: if two or more routes match the current request, Nancy is going to select the one that has the most matching static path segments before a parameter placeholder is reached (but all segments have to be filled!). What does this mean? Take the following routes:

/foo/{value}
/foo/bar/{value}

The first route has one static path segment (/foo) and the second one has two (/foo/bar).
So for a request to /foo/bar the first route will be selected, but for /foo/bar/baz the second route will be selected. It is also important to understand that in Nancy all path parameters are greedy, unlike in ASP.NET MVC where you can have one greedy parameter (indicated by a star *) which has to be at the end. If you define a route:

/foo/{value1}/bar/{value2}

you can invoke it with /foo/this/is/some/segments/bar/and/then/some/more and you will end up with:

{value1} = /this/is/some/segments
{value2} = /and/then/some/more

Of course, like I said before, this is how the default route handler works, and if you don't like it you can write your own; all you have to do is implement a single interface and tell Nancy to use it. So before I end this post, let me tell you about some of the things that are planned to be included in Nancy as soon as possible:

- A much richer request object. Nancy uses its own Request object and is not tied down to the one found in ASP.NET. I want to support a rich and easy-to-use model for request information. If you have any suggestions on the structure of this object, please let me know.
- The ability to inject dependencies into Nancy modules. I want you to be able to wire up Nancy to use your favorite IoC container and have Nancy resolve constructor dependencies of Nancy modules.
- Conditions on actions. I want to add an optional predicate on actions, like Get["/foo", () => somePredicate], to give Nancy the ability to select actions at runtime. For example, you might have two identical actions defined, but you add a predicate to one of them that makes sure it is only selected if the client is a mobile device. Actions that have a predicate defined should have higher priority than those that do not.
- View engine integration. Like I said, Nancy modules come with the View property, which is of the type IViewEngine, where you can hook up view engine support. All you need is an adapter that returns a string (or a Response instance). Please let me know if you want to chip in and help wire up one or more view engines.
- Ship with a nice bunch of response formatters. These are created by attaching extension methods to the IResponseFormatter interface, which is the property type of the Response property on a Nancy module. I think the formatters should follow a naming convention where you name them As<something>.
- Self-composed framework. What I mean by this is that I want to build Nancy on top of a tiny, internal IoC that is used to compose the framework at runtime. It should be exposed in a simple way so that you can swap out components (such as the route matcher, or the module discovery mechanism) as you please.
- Request and response interception. The idea is to provide a lightweight way to intercept Nancy requests before they hit the Nancy application and let you either pass the request on to the Nancy application or prematurely send back a reply. Combined with the ability to intercept responses sent by the Nancy application, it should give a nice way of extending Nancy with features like logging and caching. You can think of this as a sort of IHttpModule mechanism.
- NuGet presence.
- Command line (PowerShell?) support for spawning a Nancy application project.
- Provide self-hosting somehow.

There are a bunch of other things I have in my head, but I have to give them some more thought to distil proper ideas from them. But please, let me know if you can think of anything more! I want to keep Nancy lightweight and easy to use, so it will probably never be as open-ended as ASP.NET MVC, FubuMVC or Manos de Mono – but we'll have to wait and see! You can find the source code at my Nancy repository on GitHub. You can also reach me on Twitter at @TheCodeJunkie. If you want to talk about Nancy, drop me a line in the comments or on Twitter and we can move onto e-mail, gtalk, skype or messenger if needed! I hope you like where Nancy is going!
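The greedy parameter behavior described in this post can be approximated with a regular expression, which may help make it concrete. This is a sketch of the described behavior under that assumption, not Nancy's actual route matcher (GreedyRoute is a name invented for this example):

```csharp
using System;
using System.Text.RegularExpressions;

// Sketch: each {name} placeholder becomes a greedy named capture (.+),
// so a single placeholder can swallow several path segments.
public static class GreedyRoute
{
    public static Match MatchRoute(string routePattern, string path)
    {
        // "/foo/{value1}/bar/{value2}" -> "^/foo/(?<value1>.+)/bar/(?<value2>.+)$"
        var pattern = "^" + Regex.Replace(
            routePattern,
            @"\{(\w+)\}",
            m => "(?<" + m.Groups[1].Value + ">.+)") + "$";

        return Regex.Match(path, pattern);
    }
}

public static class Demo
{
    public static void Main()
    {
        var m = GreedyRoute.MatchRoute(
            "/foo/{value1}/bar/{value2}",
            "/foo/this/is/some/segments/bar/and/then/some/more");

        Console.WriteLine(m.Groups["value1"].Value); // this/is/some/segments
        Console.WriteLine(m.Groups["value2"].Value); // and/then/some/more
    }
}
```

Because .+ matches across slashes, the captures line up with the {value1} and {value2} examples in the post (modulo the leading slash shown there).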
Posted about 15 years ago
Last week was the 6th annual installment of the Øredev developer conference in Malmö, Sweden. I was able to attend Wednesday through Friday and had the opportunity to listen to a lot of great talks and have a lot of interesting conversations with the attendees. Being in its 6th year means that the conference has had time to find its style and settle in – and let me tell you right now that it has definitely done that. The theme for this year's conference was Get Real, described by the organizers as a theme that "will shine a light on how to stay in balance, between today's realities and tomorrow's possibilities while the universe is in constant motion". Their way of doing that was by providing a total of 13 awesome tracks: Java, .NET, Smart phones, Architecture, Cloud & nosql, Patterns, Web development, Social media, Agile, Collaboration, Realizing business ideas, Software craftsmanship and the Xtra(ck). The Xtra(ck) was this year's little experiment and provided alternative sessions such as "Understanding Hypnosis", "The taste of coffee", "The language of MIDI and its application", "Photo walk!" and many, many more. I never attended any of these, there were just too many things I wanted to attend while at the conference, but hopefully they'll keep the concept for next year and maybe I will get the opportunity to check out a couple of sessions then. This year 1400 people attended the conference, an impressive 40% increase compared to last year, when the head count was about 1000 people. That is quite an effort, and the best part is that you did not notice any rapid growing pains at all; everything felt just as well organized as I've come to expect of this conference. One thing that worried me up front was the change of venue for this year's conference. The change was necessary since the old venue is being knocked down, so it was not a viable option this year.
The new venue was Slagthuset (Swedish Wikipedia entry), and it was able to house all of the attendees without feeling overcrowded. The one complaint I (and probably the rest of the attendees) have was that, because of how the large halls of the building were split into temporary session rooms with the help of temporary walls, the sound from neighboring sessions bled over into the one you were attending. It was not bad to the point where you were unable to follow along, but it was definitely noticeable. I hope this is resolved next year!

The keynotes

Each of the conference days started off with a keynote, and if I had to pick one area where this year's conference really excelled, I would have to say the keynotes. This year they definitely outdid themselves and lined up an awesome line of speakers. Dr. Jeffrey Norris of the NASA Jet Propulsion Laboratory was the Wednesday speaker and talked about Mission-Critical Agility. In his talk he told the story of Alexander Graham Bell and how he invented the telephone. He talked about how Vision, Risk and Commitment are important parts of Mission-Critical Agility, and intertwined those with the story of Alexander Graham Bell. He also talked about how NASA committed to being first to put a man on the moon, and how the lunar project ended up not choosing the solution that seemed the simplest at first, but actually picking the one that sounded like crazy talk. In the part about the lunar landing he made use of augmented reality, with the help of ARToolKit, to visualize the complexity of the task. Check out this YouTube clip to see parts of what he did. John Seddon had been invited as the Thursday keynote speaker and talked about Re-thinking IT. It can more or less be summed up as a talk on how we as developers let IT get in the way of the real problems, and how we often propose solutions to problems that do not yet exist.
He talked about how we should focus on solving the business problems first and then move them into IT, not the other way around. A very well delivered talk, completely without slides! All of the attendees got his book Freedom from Command and Control as part of their Øredev goodie bag!

Nolan Bushnell spent his time as the Friday keynote speaker talking about the ten mega software projects for the next 20 years. This is the same guy that founded the Atari Corporation and has been called the father of the video game industry; more or less a legend in our field and a very good public speaker. He talked about some of the big projects that we will be facing in the next ten years and the impact they will have on our industry and on society at large.

The sessions

With a 3-day conference consisting of 8 parallel tracks and 5-6 sessions each day (Friday had one less session than the other two days), it would be too much information to tell you exactly which sessions I attended and what my reflections on them were. For the most part all the sessions were great; there was only one that I walked out of. There was always something coming up on the agenda that you wanted to attend, and in many cases there were conflicting sessions. The good news is that all sessions were recorded and will be made available on the Øredev website over the next couple of weeks. So for me the conference is not really over: there are a bunch of sessions I want to watch and a couple that I would like to re-watch!

I was also happy to see that listener participation was quite high this year. There were many sessions where people asked a lot of questions during the talk and occasionally stepped up to help answer questions that caught the speakers off guard. I guess this is what you get in an open environment such as the one that Øredev delivers. As always, the conference used the Red-Yellow-Green session evaluation system.
Basically this is a very simple way for attendees to tell the speaker what they thought about the session. When you leave the room you are given the option of placing a red, yellow or green card in a box, and that is it. I think the colors are self-explanatory, so I will not go into any details on their meaning. I know that I personally hate filling in evaluation forms, especially when you are going to attend almost 20 sessions in 3 days.

The highlight

To me, the single biggest highlight of this year's conference was neither the sessions nor the keynotes, even though both were excellent. The highlight this year was the fact that I got the opportunity to meet up with a lot of people who flew in from other parts of the world. Some I knew from before; others I have only known through Twitter or IM for a long time now. Some of the people that I got to hang out with this year were Glenn Block, Jeremy D. Miller, Hadi Hariri, Philip Laureano, Rob Ashton and Brad Wilson, all of whom are awesome developers and people I have great respect for. There were a lot of good discussions and casual chit-chat during the 3 days at the conference.

However, the single biggest highlight was not part of the actual conference itself. On Monday I had the opportunity to take a day off work and spend the day with a long-time friend, Glenn Block. With him living in the US and me living in Sweden, we do not get a lot of opportunities to hang out for a full day. Fortunately Glenn flew in early to the conference and took the train to the city where I live. He got the opportunity to meet my wife and kids, have a look around the city, visit the place where I work and see some of the things we do, get an introduction to the FakeItEasy mocking framework (developed by a guy I work with, Patrik Hägne, and used in our daily work) and show us some of the REST stuff he is working on. All in all, an awesome day.

Summary

To sum it up, this year's conference was great and I already look forward to next year!
If you get the opportunity to visit the conference I highly recommend that you take it. It is quickly becoming known as a high-quality conference and I do not see that ending anytime soon. So thank you to everyone, the organizers, the speakers, the attendees and all the people who worked behind the scenes to make this possible, for making this year just as awesome as expected! I will see you next year!
Posted about 15 years ago
I recently attended this year's Øredev conference and one of the things I had the good fortune of doing was to meet up with a long-time Twitter friend, Philip Laureano. One of the days, Philip and I started talking about a previous discussion he had had with another attendee (who shall remain nameless, since I do not know the full details of his exact opinions). Anyway, the short version is that this person suggested that classes should be sealed by default, or at least that the developer should have to explicitly state whether a class should be open or sealed. My immediate reaction was that this was a terrible idea and that I had been struck down too many times by sealed classes before. But then I started thinking that maybe it was not such a bad idea after all. Maybe it was even a sign of a well-designed class, and maybe more developers would be better off sealing their classes.

Now let me inform you that I am still on the fence about this, but I would like your thoughts on it. In fact I am hoping that the most interesting part of this post will end up being the discussion in the comment section. When you take a moment to think about the S.O.L.I.D principles, most specifically the Open-Closed Principle and the Dependency Inversion Principle, along with the old design principle of 'Favor object composition over class inheritance', then maybe it is not such a bad thing after all. Throw interfaces into the mix, program to an interface and not an implementation, and you can still swap in alternative implementations when needed. If your classes can flourish while being sealed, chances are that you have some pretty nicely structured code on your hands. There probably are legitimate reasons not to seal classes at times, despite the reasoning above, so I am not going to be definitive and say that is never the case. Voice your thoughts in the comments and let us see where this ends up – who knows, I might be left standing as a fool!
As always, you can find me on Twitter by the name of @thecodejunkie.
Posted about 15 years ago
I am using msysgit for my Git work on Windows, and for quite a while I have been wanting to move the global .gitconfig file from its default location into my Dropbox folder. The reason for this is probably obvious: I do not want to have to make the same configuration changes on multiple machines to get a homogeneous Git environment.

So I set out to resolve this, but despite numerous stops at my local Google page I was unable to find any information on how to make Git look for the file in a different location; it always went looking for the file at %USERPROFILE%\.gitconfig. As a last resort I asked Joshua Flanagan if he had any idea on how I could configure Git or msysgit to make this possible. Unfortunately he had no idea, but he did suggest that I try mklink, a command line utility available in Windows Vista and later.

mklink enables you to create either symbolic or hard links for files or directories. What I needed was a symbolic link, which simply explained is like a shortcut to a file, with the difference that the file system resolves the symbolic link to the real file, whereas a Windows shortcut is a file with information in it.

To create a symbolic link for your .gitconfig file, do the following:

1. Copy your .gitconfig file from the %USERPROFILE% directory to the directory where you want to store it instead
2. Delete the .gitconfig file in the %USERPROFILE% directory
3. Start cmd.exe as Administrator
4. Enter mklink .gitconfig path\to\your\.gitconfig (notice that the actual filename needs to be in the second parameter, or you will link to the folder it is stored in)

You should now get a message that a symbolic link was created between the two files. Try making a change in either of them and see that the change is reflected in the other one – this is because you are always editing the same file, since the file system resolves the symbolic link into the real file.
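The same trick carries over to Git on mac and Linux, where ln -s plays the role of mklink. A minimal sketch of the idea (the Dropbox path is illustrative, and everything here happens in a throwaway directory standing in for your home directory, so nothing real is touched):

```shell
# Throwaway stand-in for your home directory, so the demo is side-effect free.
HOME_DIR="$(mktemp -d)"
DROPBOX="$HOME_DIR/Dropbox"          # illustrative path to a synced folder
mkdir -p "$DROPBOX"

# Steps 1-2: the real config lives in the synced folder, not in "home".
printf '[user]\n\tname = Jane Doe\n' > "$DROPBOX/.gitconfig"

# Step 4: point the default location at the synced copy (ln -s here, mklink on Windows).
ln -s "$DROPBOX/.gitconfig" "$HOME_DIR/.gitconfig"

# Editing through either path touches the same underlying file.
printf '[core]\n\tautocrlf = input\n' >> "$HOME_DIR/.gitconfig"
grep autocrlf "$DROPBOX/.gitconfig"   # the change shows up in the real file
```

Because the file system resolves the link before Git ever sees it, Git itself needs no configuration at all; it still believes it is reading %USERPROFILE%\.gitconfig (or $HOME/.gitconfig).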
Posted about 15 years ago
Have you ever stopped to think about the industry you have chosen to work in? (I'm bluntly assuming that if you are reading this you are working in the software industry in some way.) I would call it one of the most complex industries in the world. Think about it. We are working in an industry that is evolving at an incredible pace, contains an incalculable number of technologies, frameworks and best practices, and constantly redefines how things should be done in the best possible way. It's the industry that makes the rest of the world tick. Daunting, really, if you think about it.

A while back I read a couple of posts by Gil Zilberfeld (here and here) where Gil talks about the responsibility that vendors such as Microsoft bear in securing the quality of the work that is produced in our industry. While I think I see the points Gil is trying to make, I think he misses the mark a bit and I have a hard time agreeing with the conclusions he draws.

The way I see it there are two types of developers: those that are just in it to pay the bills and those that consider themselves craftsmen. If you consider yourself a craftsman then you should already be aware that you are responsible for your own fate and actions in this industry. But if you are just in it to pay the bills, then you are probably also looking to do so with the least amount of work, and that includes looking for information on how to solve a particular problem or how to apply a technology to your stack. So if you are one of the developers that only look towards Microsoft (or the relevant company for the technology stack you are working on), is it their fault if you implement something in a way that could be considered bad? Of course not!
Sure, there are a lot of outdated and downright poor samples on the Microsoft (or relevant company) website, and their ideas on how certain things should be solved are bound to differ from others' (and that's definitely not to say that there isn't good content; there is a ton of it). However, if you rely on a single source of information, you are always going to get an opinionated view. Take my word on it (right?).

Doctors read medical journals, publish research papers, attend conferences, network with colleagues and make sure they stay up to date with the latest in their field. I'm pretty sure you are happy that they spend all of this time making sure they can provide the best possible care and treatment when someone is in need of their services. I know I am. Just as in any other profession, developers are responsible for their own education, for honing their skills in the craft that they have chosen to practice. In order to keep up in an industry that evolves at the speed of light you need to invest in yourself. The code you write today should be some of the best you have ever written, while a year later you should be considerably less excited about its quality. It's a sign of growth – that you've continued to move forward as a craftsman, that your skills have been honed and broadened during the past year.

So what about the tools? Do we really rely on them too much to get the work done? I would say, definitely not! But again, you have to specify just exactly what you mean when talking about tooling. If you rely on visual designers, drag and drop, wizards and the like to do the majority of your work, then yes, you are probably relying too much on your tools. Odds are that you will have a hard time getting anything outside of standard behavior to run, and there will be pain points when you need to debug.
However, you would do yourself (and your employer) a huge disservice if you did not make it your goal to know the tools in your toolbox as well as possible. What's wrong with knowing how to use the debugger, the IDE and tools like ReSharper as well as you can? Used correctly, they will have a huge impact on productivity. Make sure you know the finer details of your tools and make them work for you, not the other way around. Yes, sometimes tools do get in the way of the goal, even slow you down, and when that is the case, don't use the tools! Tools are there to help you when you need them, not to act as a crutch you always have to lean on so you don't fall on your ass.

Well, there you have my thoughts on the subject. It's always up to the developer, not the companies. Always.