Here's a guide on how to create a two-branch TFS 2010 solution. It's as much a reminder for me as it is information for anyone else who might find this useful.
1. Create a new Team Project on the TFS server. Mine is called 'Baseline'
2. Create yourself a new folder on your C: drive called 'Visual Studio 2010'. Create another folder within this to store your project files, i.e. c:\Visual Studio 2010\Baseline.
3. Open the Source Control Explorer (I'll refer to it from now on as TFS Explorer) for the Baseline Team Project. Use the Workspace drop-down to create a new Workspace and call it 'Baseline'. Set the TFS working folder to the top-level folder of the TFS project (i.e. Baseline), and the local folder to whatever you created in step 2.
4. Create two new folders under the Baseline top level in TFS, call them Dev and Release.
5. Create a new VS2010 BLANK solution. Call it Baseline and place it into the Dev folder. If this has worked properly you should now be able to see c:\visual studio 2010\baseline\dev\baseline in Windows Explorer, containing the .sln file and the .vssscc file.
6. Add a new web project to the blank solution. Call it what you want (I've called mine BaselineWeb) and make sure it gets created inside the folder structure from step 5. So you should have c:\visual studio 2010\baseline\dev\baseline\baselineweb\*.* where *.* is all the .aspx files, web.config etc. for the website.
7. Next you need to check all this stuff into the TFS server.
8. Right click on the Dev folder in TFS Explorer and select 'Convert to Branch'. The icon for the Dev folder in TFS Explorer should change from the standard yellow folder to a white branch icon.
9. You now need to branch the Release folder. Right click on the newly branched Dev folder and select Branch. The target needs to be $/Baseline/Release/Release (i.e. you're adding a new Release folder under the original one). Commit this to the server. The Release/Release folder should also have had its icon replaced with a Branch icon.
10. You should be good to go now. You can merge the content of the Dev folder into the Release folder and vice versa.
Friday, 31 December 2010
Thursday, 30 December 2010
Volunteering with Community Groups (2)
The 'mailshot' that I sent out on Boxing Day has received a lot of interest. I have already been contacted by half a dozen different organisations in the local area who are interested in getting some help. This probably means that I won't be able to get to them all straight away but I'm thinking about some other ways in which I could still help without devoting time to design / development work.
Improving Database Development
Going hand in hand with my exploratory work on Team Foundation Server 2010 I have been thinking recently about how my team can improve the quality of our database work too. We tend to keep things simple with databases by using just tables and stored procedures. The stored procedures are always extremely simple - either a read, or an insert, or an update. We don't store any advanced application logic; I have always found T-SQL difficult to program in and we use VB.NET for all the complicated stuff.
We're facing a few problems. All of us are working on the same database at the same time, which I have read in a couple of places (including K. Scott Allen's excellent blog on the subject) is a bad thing. We also have different versions of the database without really knowing what the differences are between them, and it's difficult to keep track.
After reading around on the subject I'm planning on introducing the following changes:
1. I would like to give each developer a sandbox area in which they can work in isolation. I am going to look at how this can be achieved through the use of a virtual desktop.
2. I want to introduce a schema repository that developers write change scripts to and can use to get the latest database build. Again I have taken inspiration from K. Scott Allen. The repository will be a single database table with some information about the version, update date, author etc. and also the location of the text file that contains the script. So the first entry will be 1.0 and will be the build script for the database as it stands at the moment. Every time a developer makes a change they will use a custom app (I need to write this) that will create a new entry in the table and also write the script to a text file. I then need a process that will let someone get the latest build by creating one giant script encompassing the baseline and all subsequent changes (there's a rough sketch of this below). The developer could also pick a specific version to build up to, in which case it would only include changes up to that version and leave out anything newer.
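A minimal sketch of the "build up to a version" part of that idea, assuming a hypothetical SchemaChangeLog table with Version and ScriptPath columns and change scripts stored as plain .sql files (all the names here are illustrative, not a real schema):

    Imports System.Data.SqlClient
    Imports System.IO
    Imports System.Text

    Module SchemaBuilder

        ' Concatenates the baseline script and every change script up to and
        ' including targetVersion into one upgrade script, in version order.
        Public Function BuildUpgradeScript(ByVal connectionString As String, ByVal targetVersion As Decimal) As String
            Dim sql As String = "SELECT Version, ScriptPath FROM SchemaChangeLog WHERE Version <= @target ORDER BY Version"
            Dim output As New StringBuilder()

            Using conn As New SqlConnection(connectionString)
                Using cmd As New SqlCommand(sql, conn)
                    cmd.Parameters.AddWithValue("@target", targetVersion)
                    conn.Open()
                    Using reader As SqlDataReader = cmd.ExecuteReader()
                        While reader.Read()
                            ' Each script goes in as its own batch so the output can run in SSMS or sqlcmd.
                            output.AppendLine("-- Version " & reader("Version").ToString())
                            output.AppendLine(File.ReadAllText(reader("ScriptPath").ToString()))
                            output.AppendLine("GO")
                        End While
                    End Using
                End Using
            End Using

            Return output.ToString()
        End Function

    End Module

Getting the latest build is then just a case of calling BuildUpgradeScript with the highest version number in the table; picking an older version gives you the database as it stood at that point.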
I think this will be a big help for us in terms of organising and moving forward. Hopefully I can post some more on this subject when I get it all up and running.
One last point: I know that Team Foundation Server 2010 supports database projects that can source control all the objects in a database, but we don't have the correct version - it's not included in the Professional license (I think it's Premium and Ultimate only).
Sunday, 26 December 2010
Harrogate Premier Bathrooms
Today I published my first website for a client (OK, so the client is my Dad and his business partners). It's a small three-page site just to get them going, but the design is neat and I have some good ideas to implement in 2011. Here's the link: Harrogate Premier Bathrooms.
Volunteering with Community Groups
Today I sent an email to around 60 community groups based in Warrington to ask if they would like any help from me on a volunteer basis. I have tried looking for volunteer work before through the IT4Communities website but not had any takers.
I understand from working with community groups and small businesses on the Knowsley Community Tradenet project that time is the scarcest resource for the people who run these organisations, and I hope that by offering my time and skills I can help one or two IT-based projects get off the ground.
I took a list of contacts from the Warrington Community Information Directory.
Wednesday, 22 December 2010
Team Foundation Server 2010 (3)
Came across a problem referencing a .dll today. The application that I am migrating into TFS2010 as part of the learning process relies quite heavily on the Ajax Control Toolkit for its UI and the .dll has to be included as part of the solution.
Unfortunately it's not enough to reference the library as was the norm in VS2008 because the build server couldn't see it. I tried creating a folder called C:\Build on my laptop and the build server and placing the file in both these locations but that didn't solve it either. Fortunately I found the answer about half way down this thread; you have to include the .dll in the workspace structure of the solution otherwise it won't work. So I created a folder called References inside the project folder of the branch I'm trying to build, added the .dll to it and then referenced the .dll from this location in my VS2010 solution. Worked like a charm.
It looks like TFS ignores any .dll you add this way for build purposes; I guess you don't want your build failing because the 3rd party .dll you are using has an unassigned variable lurking. I then tried merging the dev branch into main to see if TFS was smart enough to replicate the referencing and it was, there is now a folder called References in the Main workspace and the solution file for Main contains a reference to that file. Very slick indeed.
Tuesday, 21 December 2010
Team Foundation Server 2010 (2)
I've spent the majority of the day trying to teach myself how to set the server up and how it works with Visual Studio 2010. Fortunately it's all pretty intuitive and I think I have it figured out to a level where the team can make use of it. It is going to save us a lot of time and make us work better.
There were a few different topics that I looked at. I followed Jason Zander's tutorial on his blog and that taught me how to do the install, create a project in TFS and then add code to it from VS2010 and make changes. What this doesn't go into is two slightly more advanced topics - using the Source Control Explorer and Branching. It also skips over Workspaces (basically the 'Set Working Directory' option that was in Visual Source Safe).
Branching allows you to create multiple copies of the same application and keep them in different states simultaneously. The simple example I saw on CodePlex has three Branches - Main, Dev and Release. The Dev Branch allows the dev team to work on one copy of the application while the Main and Release Branches are kept 'clean'. When the dev team has completed something and tested it the changes can be merged into Main and Release, built and published. Another benefit of this is that if something urgently needs doing to the Release version you can implement the change on Release and propagate it back down to Main and Dev. This seems like a really powerful tool.
The Source Control Explorer is what manages these Branches. Once you have created the Main Branch you can populate it with a blank solution from VS2010 and then Merge this into the other Branches. I guess there's maybe room for a Test Branch too.
Creating a Build is really easy (once you figure out how to give the correct permissions to write to the build folder on the server) and it drops out a compiled web app which you can quickly drop onto a server. You can specify builds for the different Branches and schedule these to run multiple times throughout the day. I guess the dev build needs to happen pretty frequently, the release build could be run manually to save the server overhead.
The irritations caused by Source Safe are pretty much gone. Tomorrow I'm going to install VS2010 on another development laptop and see how easy / difficult it is to get a copy of the source code and make changes to it.
Team Foundation Server 2010 (1)
I have been working as part of a team of six developers for just over a year now. This has been the first time in my career that I have been part of such a team and in that time we have created a lot of components. These are scattered across a range of applications - web apps, web services, databases, SSIS packages and standalone .exe files that we run from servers on a schedule.
We're at the stage now where controlling change amongst the team is difficult. We are using Visual Source Safe as our source code repository and it's not brilliant. I'm never 100% confident that when I build a new version of the application I'm working on and publish it to our test environment, it will contain all the most up-to-date code. So instead of moving forward with functionality I'm spending time bug-fixing changes so that I can get a demo working for customers.
I'm also finding the whole build process very unsatisfactory. At the moment I have a Visual Studio 2008 solution with a dev project that we're all contributing to and a test project that contains just the files I need to run a particular app. Creating a build involves knowing which of the files have changed, copying them into the test project, remembering to change the namespaces and then publishing to the test server. It's a slow and irritating process.
I also have the attitude of team members to deal with. Communicating with people when you all sit in a space that's about 10m square should be simple but it's not. I'm not getting told when things change and only find out about changes when I try and publish a version of the app and it stops working because a datatype has changed in the database.
All this is stopping us progressing. Fortunately we have just acquired some MSDN developer licenses that allow us to install Visual Studio Pro 2010 and Team Foundation Server. I have been reading plenty of stuff on the web about how these development tools will help to resolve the problems that I'm facing and move the team forward. So I'm going to spend the next few months concentrating on developing the practices and procedures necessary to improve our efficiency and quality. I would also like to explore the opportunities that Agile programming will bring to the team and see how well it fits with what we do because I think we can learn from that too. The next set of blog posts will all record my findings as I carry out this piece of work.
Graduated!
Last week (Wednesday 15th December, 3pm to be precise) I attended my graduation ceremony at the Liverpool Philharmonic Hall and received my certificate from the vice chancellor. I am now officially a Master of Science! It was a great day and made a nice change to be in a room full of happy people all celebrating something.
Wednesday, 27 October 2010
ASP.NET ASCX controls and ViewState
I've written a scripting engine into the ASP.NET app that I am developing. It's pretty neat - the user can choose from a list of scripts to load and work through, the script is loaded into a panel and they can follow it through.
Identified a problem yesterday whereby the second and subsequent scripts that got loaded had a strange little error - the first server control in the script didn't work on its first event (e.g. click, selectedindexchanged). It would work the second time the control was used - but that's no good for the users.
So I set about trying to solve it. After about seven hours of reading, trying things out and getting really frustrated I did solve it thanks to a couple of articles on the web. I made a load of mistakes so hopefully someone will read this and avoid making the same ones.
First, I had the scripts declared globally, i.e. Dim scriptname As Global.namespace.scriptname. Wrong wrong wrong!
What you need to do is create a ViewState entry called LastLoadedControl and set it to the physical path of the control. This persists between postbacks of the parent form, so you always know which control to reload. Then create a Private Sub called LoadUserControl(). This should check that LastLoadedControl isn't null, then load the control and attach it to the Panel / Placeholder that you're using to display the controls.
You need to call this LoadUserControl method in the Page_Load event every time. Also, in the method that you're using to drive the user interaction (i.e. the list of scripts to pick from) you need to declare the physical path of the script, set the LastLoadedControl ViewState to this value, then call the LoadUserControl method.
Check out the first example in this blog (it's in C# but easily converted) for a really good example.
A couple of warnings. First, even though the example above solved my problem, it was causing an error whereby the aspx page was trying to load the same control into the Panel / Placeholder twice and falling over as a result. What you have to do in the LoadUserControl method is explicitly set the ID of the control, otherwise the ViewState of the control (i.e. its values) won't carry across between postbacks. Obviously that creates a scenario where you end up adding the same control twice, so to get around it you have to remove the control from the Panel / Placeholder every time. The example says to use Panel.Controls.Clear() but this didn't work for me. What I did was check that the Panel had more than one control, then call Panel.Controls.RemoveAt(1) to remove the old control before adding the replacement.
Second warning - some people say to do the control loading in Page_Init rather than Page_Load. The problem with that is that Page_Init can't reference any Panels / Placeholders etc. so it's a bit of a waste of time.
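Putting all of that together, here's a rough VB.NET sketch of the pattern (the panel, list and path names are placeholders I've made up; only the LastLoadedControl key comes from the description above):

    Private Sub LoadUserControl()
        Dim controlPath As String = TryCast(ViewState("LastLoadedControl"), String)
        If Not String.IsNullOrEmpty(controlPath) Then
            ' Remove the previously loaded copy first. In my page the panel already
            ' holds one other control at index 0, hence RemoveAt(1); on an otherwise
            ' empty panel Controls.Clear() may be all you need.
            If pnlScript.Controls.Count > 1 Then
                pnlScript.Controls.RemoveAt(1)
            End If
            Dim script As UserControl = CType(LoadControl(controlPath), UserControl)
            ' Give the control a fixed ID so its own ViewState survives postbacks.
            script.ID = "loadedScript"
            pnlScript.Controls.Add(script)
        End If
    End Sub

    Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
        ' Reload the last selected script on every postback.
        LoadUserControl()
    End Sub

    Protected Sub lstScripts_SelectedIndexChanged(ByVal sender As Object, ByVal e As EventArgs) Handles lstScripts.SelectedIndexChanged
        ' Remember which script was chosen, then load it straight away.
        ViewState("LastLoadedControl") = "~/Scripts/" & lstScripts.SelectedValue & ".ascx"
        LoadUserControl()
    End Sub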
POSTSCRIPT: just tried loading something in Page_Init in the .ascx control rather than the Page_Load event to stop the value of a control changing every time, and it worked. I wonder if it's because I had Page_Init set to Private before; now that I have changed it to Protected it seems to work OK. My bad.
Tuesday, 26 October 2010
Cross-Domain Scripting and iFrames
I've been doing some work over the last couple of days to embed data from a third-party API into the .NET web app front end that we're working on. The API isn't open to everyone - you have to access it via Javascript and it can only be accessed on the server that all the component parts run on. As APIs go it's pretty useless when you want to hook into it from somewhere else.
So our javascript guru showed me how to make an app in an iframe talk to the parent app that's hosting the iframe and away I went. I built a .jsp page on the API server that delivers all the data I want and then modified the .aspx page to include some labels to display the data.
I ran into a couple of problems. The first concerns the iframe load time. I originally tried to write some javascript that would load the values from the divs on the .jsp page into server controls on the .aspx page, but this failed because the iframe was always the last thing to load and all the server controls saw were NULL values.
The js guru suggested a kind of timer function that would check when the iframe had finished loading before running a script to get the values. But it turned out that the simplest solution was to make the .jsp page populate the values itself - you can make the content of the iframe control what gets written to the screen in the parent just as much as you can do it the other (and more logical) way around. In this case the connotation of parent and child is wrong - they're more like Siamese twins, joined together and able to influence each other's movements.
Another useful thing to know is that you can only make two apps on two different servers talk to each other if you suffix the server name in the URL with the name of the domain that they both share. So if my domain is www.markp3rry.com then you will need to set the servers to http://server1.markp3rry.com:8080/ and http://server2.markp3rry.com:8090/ (for example).
There's a heck of a lot of scuttlebutt written on the web about this topic so tread carefully. A lot of the examples for extracting information from an iframe in both javascript and jQuery were making me tear my hair out. You have been warned!
Monday, 13 September 2010
The Pomodoro Technique
http://www.pomodorotechnique.com
"A way to get the most out of time management". Charlie Brooker mentioned this on the Guardian website (in amongst a rant against Google's crazy new instant search) and I have found it a very useful way of getting some work done. I've been struggling recently with my concentration levels; in a hot, noisy office where people are talking to you it can be difficult to focus on the task in hand - especially when said task is repetitive and / or dull.
Pomodoro basically says "get a stopwatch, work for 25 mins then grab a 5 minute break and repeat". That's not all it says and I urge you to read the website for more information. I think I remember reading somewhere else that the maximum concentration span of an undergraduate in a lecture is 25 minutes which makes the 50 minute lectures I used to attend 100% too long and this technique backs that up. It's been useful for me in breaking the day down into smaller slots with a beginning (and, more importantly, an end) and has helped me to work through what I have needed to do today. Maybe it will work for you too?
"A way to get the most out of time management". Charlie Brooker mentioned this on the Guardian website (in amongst a rant against Google's crazy new instant search) and I have found it a very useful way of getting some work done. I've been struggling recently with my concentration levels; in a hot, noisy office where people are talking to you it can be difficult to focus on the task in hand - especially when said task is repetitive and / or dull.
Pomodoro basically says "get a stopwatch, work for 25 mins then grab a 5 minute break and repeat". That's not all it says and I urge you to read the website for more information. I think I remember reading somewhere else that the maximum concentration span of an undergraduate in a lecture is 25 minutes which makes the 50 minute lectures I used to attend 100% too long and this technique backs that up. It's been useful for me in breaking the day down into smaller slots with a beginning (and, more importantly, an end) and has helped me to work through what I have needed to do today. Maybe it will work for you too?
Wednesday, 8 September 2010
Creating a JSON-enabled .NET Web Service
This assumes you're writing VB.NET. The blogging app won't let me include less-than / greater-than angle brackets, so make sure you add these around the attribute declarations yourself.
1. Create a new ASP.NET Web Service Application project in Visual Studio 2008 / 2010. Call it 'JSONWebService'.
2. Rename the default 'Service1.asmx' to 'json.asmx'. Also rename the code behind class from 'Service1' to 'json' and change the binding on the front .asmx page.
3. There should be a commented line in the template code that the project creates that looks like "system.web.script.services.scriptservice()_" (the ScriptService attribute). Uncomment this.
4. Add a new Public Class to your code behind. The example I am going to use will provide data on staff contact information (i.e. phone number, email address) but you can fit in whatever data you want. So I will do Public Class StaffInformation. Create four public string variables within the Class; Public forename As String, Public surname As String, Public phonenumber As String, Public email As String.
5. Add a new Web Method to the json Class. Call it GetEmployees so you'll need to do Public Function GetEmployees(ByVal id As String) As List(Of StaffInformation). Make sure you add the WebMethod() line of code above the Function declaration - you can copy / paste that from the HelloWorld method that the project template creates.
6. You'll also need to add this line inside tags below the WebMethod declaration: System.Web.Script.Services.ScriptMethod(UseHttpGet:=False, ResponseFormat:=Script.Services.ResponseFormat.Json)
7. A couple of important points here. The WebMethod is returning a List - that's pretty vital to the JSON output because from what I have read .NET struggles to convert more complicated objects such as DataTables. I had never used a generic List before sorting this example out and it's a useful look into how OO works in VB.NET. The ScriptMethod in point 6 is also important because a) it tells the web service to return data in JSON notation, not XML, and b) it forces the AJAX call to be made using HTTP POST and not GET. POST is essential when you write the AJAX call in jQuery.
8. The GetEmployees function is going to read some data from a database, put it into the List object and then return the List. Something clever then serialises the List into JSON.
9. Create SqlConnection, SqlCommand and SqlDataReader objects and set the first two up. In my example I did a simple SELECT statement against a table to return the four fields I'm looking for (as specified in the custom List).
10. Declare a new instance of the List (i.e. Dim MyList = New List(Of StaffInformation)).
11. Execute the SqlDataReader. Loop through the dataset with it and do the following:
12. Dim item = New StaffInformation With {.field1 = reader(0).ToString(), .field2 = reader(1).ToString(), etc.}
13. MyList.Add(item)
14. Return MyList at the end of the Function.
15. You also need to modify the Web.Config file to allow AJAX HTTP calls. Inside the system.Web node add webServices, then protocols inside that, and then two new entries inside protocols: 'add name="HttpGet"' and 'add name="HttpPost"'.
16. That's it. Compile, publish to IIS and test with a jQuery AJAX call.
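For reference, here's roughly what the finished service from the steps above looks like when you put it all together. The connection string and SELECT statement are placeholders - swap in your own:

    Imports System.Collections.Generic
    Imports System.Data.SqlClient
    Imports System.Web.Services

    Public Class StaffInformation
        Public forename As String
        Public surname As String
        Public phonenumber As String
        Public email As String
    End Class

    <System.Web.Script.Services.ScriptService()> _
    Public Class json
        Inherits System.Web.Services.WebService

        <WebMethod()> _
        <System.Web.Script.Services.ScriptMethod(UseHttpGet:=False, ResponseFormat:=System.Web.Script.Services.ResponseFormat.Json)> _
        Public Function GetEmployees(ByVal id As String) As List(Of StaffInformation)
            Dim MyList As New List(Of StaffInformation)

            Using conn As New SqlConnection("your connection string here")
                Using cmd As New SqlCommand("SELECT forename, surname, phonenumber, email FROM Staff WHERE department = @id", conn)
                    cmd.Parameters.AddWithValue("@id", id)
                    conn.Open()
                    Using reader As SqlDataReader = cmd.ExecuteReader()
                        While reader.Read()
                            ' One StaffInformation item per row, added to the list.
                            Dim item = New StaffInformation With { _
                                .forename = reader(0).ToString(), _
                                .surname = reader(1).ToString(), _
                                .phonenumber = reader(2).ToString(), _
                                .email = reader(3).ToString()}
                            MyList.Add(item)
                        End While
                    End Using
                End Using
            End Using

            ' The ScriptService / ScriptMethod plumbing serialises the List to JSON.
            Return MyList
        End Function

    End Class

Remember that the angle-bracket attributes shown here are the bits the blogging app stripped out of the steps above.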
Friday, 3 September 2010
jQuery with ASP.NET Web Service
Had a very productive day today hooking up an HTML web page with the jQuery library to an ASP.NET web service. I've written previously about my research into this and while the WCF RESTful stuff looked good I wasn't convinced it was the best way forward for this. We've decided to use a sickness absence report as a trial for developing some skills in Javascript development, and today I was looking for a better way to expose web service methods that jQuery could easily communicate with.
After several hours of searching and some dead ends I stumbled across this article http://www.dotnetcurry.com/ShowArticle.aspx?ID=320&AspxAutoDetectCookieSupport=1 that finally showed me how to do all the things I have been reading about. The standard ASP.NET web service gets a few additions to it (such as System.Web.Script.Services.ScriptService / ScriptMethod) and you can tell the individual ScriptMethod to return data as JSON. That's great because the numerous jQuery AJAX examples all deal with JSON instead of XML. The article also shows you how to return a List of data from VB.NET (something I had never done before - I have always used DataSet / DataTable) which allows you to create a custom List with a number of attributes and then add as many items to this List as you like. The list gets populated with data from the SQL result set.
You can then write what is now becoming a pretty standard jQuery AJAX function, get a result set object, loop through it with a FOR loop and write the output into an HTML table, which then gets displayed in a div on the page (that's another thing I love about jQuery; you can actually modify whole divs).
There are plenty of examples in C# on the web but this is the first complete working example I found in VB.NET. Check it out!
Tuesday, 31 August 2010
More ASP.NET AJAX UpdatePanel excitement
I gave up. The UpdatePanel wasn't really designed to do what I want it to do. I thought initially that I could get nested UpdatePanels to just refresh localised content; i.e. if I created one inside a div then by setting it to conditional update I could just make it refresh the controls inside the div. This is not the case - even nested panels refresh everything.
The Conditional update flag is OK, but you can't make a control that's outside the panel refresh it. And having a timer that refreshes content is a no-no for the reason discussed above; every ten seconds everything gets refreshed, which means the text box you are typing into loses focus and the modal popup with a list of options pops down again.
I have a series of scripts that I have embedded into the main .aspx page via .ascx controls. These scripts open a new app hosted on an entirely separate server in a new IE tab. Once the user has finished with this second app I need a way to refresh the content on the main .aspx page, so what I ended up doing was navigating through the control structure of the main page from the .ascx control and finding the labels etc. that I want to update. I rigged up a modal popup with a giant refresh button; when the user jumps out to the second app this popup appears, so it's the first thing the user sees when they close the second app. The refresh button then brings my main page up to date.
Navigating the control structure was a bit of a nightmare; start from Me.Parent.Page and then iterate through five layers of controls to find the one I want (and that's just to pick out one label - there's more work to do). But at least I found a viable solution.
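The control-tree walk itself can be tidied up with a small recursive search rather than hard-coding five layers of Controls(n) - a rough sketch, where the label ID is made up:

    ' Depth-first search of the control tree for a control with the given ID.
    Private Function FindControlRecursive(ByVal root As Control, ByVal controlId As String) As Control
        If root.ID = controlId Then Return root
        For Each child As Control In root.Controls
            Dim found As Control = FindControlRecursive(child, controlId)
            If found IsNot Nothing Then Return found
        Next
        Return Nothing
    End Function

    ' Called from inside the .ascx control: go up to the hosting page, then
    ' search down for the label that needs bringing up to date.
    Private Sub UpdateParentLabel(ByVal newText As String)
        Dim lbl As Label = TryCast(FindControlRecursive(Me.Parent.Page, "lblStatus"), Label)
        If lbl IsNot Nothing Then
            lbl.Text = newText
        End If
    End Sub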
Thursday, 26 August 2010
Fun (!) with the ASP.NET AJAX UpdatePanel
Still unlucky enough to be developing with the ASP.NET AJAX UpdatePanel? Me too. You have my condolences. After spending some time getting to grips with it you may be at the stage where you realise that you could use it to make real time updates to areas of your web app by using a Timer to run an automatic refresh.
The problem I found initially was that doing this tended to take over the entire web app and refresh everything. I also ran into problems with Triggers - i.e. I couldn't get certain parts of the app to update because I needed to fire them through an Asynchronous trigger and it wasn't working.
So this afternoon I set out to try and solve the problem. I am working on an app that will 'branch out' to another application on another server in certain circumstances. The user completes some actions in this second app and will then close the IE window / tab and return to my app. When this happens I want some things to change - mainly text and contents of grid views. Should be achievable with the UpdatePanel. Right?
I have set up a web form that inherits from a MasterPage. The first thing on the page is the ToolkitScriptManager control, followed by an UpdatePanel called 'upMain' with UpdateMode = Conditional. I have a div with some content, and then underneath this an Accordion control with five different panes.
First, a note on the timer. Initially I had a set of labels in the header div (inside one UP) and a grid in the fifth pane (inside a separate UP) that I wanted to refresh. I tried setting a Timer on each of these and using them as Async triggers on each UpdatePanel but that didn't work - only the first Trigger would fire for some reason. I tried creating one Timer inside the main UP and using this as the Trigger for each sub-UP but that didn't work either - it just took over the entire app and, for example, started interrupting data input halfway through (no good for the user experience).
The answer, it turns out, is to create the Timer inside the first sub-UP and wire up the Async triggers for both sub-UPs to this Timer. That works OK and refreshes both sets of content. To get the second sub-UP to recognise the Timer you have to give it the UniqueID that ASP.NET assigns, which is something like ctl00$main$... To find this, set a breakpoint in your Timer event handler and check the value of sender when it breaks - you can interrogate the object and find the UniqueID value there.
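If you'd rather not dig it out with the debugger every time, you can read (or log) the UniqueID from the sender argument in the Tick handler instead - a quick sketch, where tmrRefresh is a placeholder name:

    Protected Sub tmrRefresh_Tick(ByVal sender As Object, ByVal e As EventArgs) Handles tmrRefresh.Tick
        ' Writes something like "ctl00$main$tmrRefresh" to the debug output -
        ' the value the second sub-UP's async trigger needs to reference.
        Dim t As System.Web.UI.Timer = CType(sender, System.Web.UI.Timer)
        System.Diagnostics.Debug.WriteLine(t.UniqueID)

        ' ...then refresh the labels and the grid as normal.
    End Sub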
More on this later.
Monday, 23 August 2010
Do I really need to ditch ASP.NET?
Something I have been thinking about for a while is trying to move away from ASP.NET as the front end for the web applications that I work on, instead using a combination of Javascript / jQuery for the client and WCF to handle the transactional and data part of things. You'll see some recent posts that look at this relationship in a bit more detail.
But do I really need to ditch ASP.NET? It's taken me 6 years to become pretty proficient with the framework (and I still learn new stuff when I'm busy with projects). Starting from scratch with jQuery would probably be another six years - and for what? I see websites like Ticketmaster which must handle a crazy amount of traffic and I don't see much there that I couldn't handle in ASP.NET. Could I replicate the way that Facebook works? Probably not. But does that really matter? I guess there comes a time in the career of many professional footballers when they realise that they're never going to play for Liverpool. Same applies to me.
Thursday, 19 August 2010
WCF and jQuery (Part 3 The Return)
So WCF is pretty incredible. It's not new by any means and there are plenty of cleverer people out there who use it every day and get results and that's cool. But if you've never used it before and you get something working like I have this week you can suddenly see all kinds of potential.
Firstly it's good for me that Microsoft enable this to happen pretty easily. I bought a Ruby on Rails book a couple of months ago and started going through the examples and was blown away by how you can get a simple RESTful application running in a few minutes. Well, now I can do that with the Microsoft platform too which means I can still code in VB.NET (great) and still use Visual Studio which I think is the best development environment on the market. Tried Aptana. Tried Eclipse. Tried Notepad. No competition.
Second, it makes life easier for me as a developer because I can publish a service with access to all the data and let someone else decide how they want to consume it. When writing SOAP web services you have to make a conscious decision to return XML or an Object or a String or something else. If I wanted to return JSON I would have to get the data out of the database and transform it into JSON. It's not difficult but it takes time. With this WCF DataService the developer can say "I want the data in XML or JSON" just by flicking a switch in the code. I don't need to get involved (from what I read, though, JSON is what everyone wants - and why not? It's a heck of a lot easier than parsing an XML document).
No more database objects to write and maintain. Before, customers were saying "we want to see this subset of data in this order", and then they would ask to see it in a different order and with something else appended. Again, not impossible, but if you're working on a relatively small but important application a Business Objects license is out of the question because they're too expensive. So you have to do the reporting yourself. This RESTful stuff makes those leviathan reporting tools obsolete, much like QlikView does (see my previous posts for some info on that amazing tool).
I guess that security is a concern, much more than it was before. If I stuck the service that I created in the demo onto the internet then anyone could come along and CREATE, UPDATE and DELETE records using request / response. So understanding security becomes important, and I suppose a decision needs making about securing the web server (because I'm guessing you can restrict GET/POST/PUT/DELETE there) or securing the DataService itself. Something to read up on.
My next goal is to get a meaningful Javascript app communicating with a service and doing some funky stuff, reading data into a grid, creating records, updating records, deleting records. The Javascript stuff is going to be much more difficult because I have never used it before but I do like a challenge.
WCF DataService and jQuery (Part 2 The Revenge)
Continuing from where I left off yesterday...
The WCF DataService is up and running and accessible via IIS on http://localhost. Now I can write a simple web page that uses jQuery to talk to the service and read some data. There are loads of other blog posts showing you how to do this; I based my solution on something that Shawn Wildermuth posted here: http://wildermuth.com/2010/02/23/WCF_Data_Services_and_jQuery.
Unfortunately I can't post the full HTML page because Blogger won't allow it and I haven't got time to set up my own domain with some blogging software that's more suited to a developer. But you can use the example I linked to above and figure it out. It's worth noting that Microsoft typically complicates things by adding a wrapper around the JSON that you get back, so you have to use results.d.FieldName to get to the data.
What gets even crazier is that if I switch the GET to POST and add a data field to the $.ajax function (something like data: '{"CategoryName":"Test Category"}') then I can create a new record in the database without having to write a stored procedure or a SQL statement or anything. Pretty incredible.
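Putting those two together, the script side boils down to something like the sketch below. It assumes jQuery is already referenced on the page and uses the service address from yesterday's walkthrough along with the Northwind Categories entity set, so treat it as a rough outline rather than a finished page.

// Read the Categories as JSON. The Accept header asks the DataService for JSON,
// and the 'd' wrapper mentioned above means the array comes back as results.d.
$.ajax({
    type: "GET",
    url: "http://localhost/NorthwindService/NorthwindDataService.svc/Categories",
    dataType: "json",
    beforeSend: function (xhr) { xhr.setRequestHeader("Accept", "application/json"); },
    success: function (results) {
        var names = [];
        $.each(results.d, function (i, category) {
            names.push(category.CategoryName);   // field names come straight off the JSON
        });
        alert(names.join(", "));
    },
    error: function (xhr) { alert("Read failed: " + xhr.status); }
});

// Create a new Category by switching GET to POST and sending a JSON body
$.ajax({
    type: "POST",
    url: "http://localhost/NorthwindService/NorthwindDataService.svc/Categories",
    contentType: "application/json",
    data: '{"CategoryName":"Test Category"}',
    success: function () { alert("Category created"); },
    error: function (xhr) { alert("Create failed: " + xhr.status); }
});

The page itself needs to be served from the same localhost site as the service, because the browser won't let the AJAX call cross domains.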
Wednesday, 18 August 2010
WCF DataService and jQuery
I have spent the last couple of days looking at WCF DataService and integrating it with jQuery. It's something I have been thinking about for a good few months now. We have some good reasons for wanting to do this: jQuery offers much more client-side power than ASP.NET on its own, RESTful web services are the way of the future, and it's good to keep on top of what the private sector is already doing.
One of the frustrating things I find when undertaking a learning exercise like this one is how many dead ends you have to walk down before you arrive at the answer. I have spent two days Googling about a million different phrases and reading endless blog and forum posts about configuration, design and debugging. Even getting a simple RESTful web service via WCF up and running was a challenge but I finally cracked it and now I have a very simple web page making an asynchronous AJAX call to an IIS hosted WCF Data Service (which sits on top of the Northwind database), obtaining data in JSON and parsing through it. I had never done anything like this before so it has been an interesting (and at times frustrating) two days.
Here's a very quick walkthrough.
Fire up Visual Studio 2008 and create a new WCF Service Application (you might need to get the templates, or you might be lucky enough to have Visual Studio 2010, which has them pre-installed). Call it NorthwindService. I'm using VB.NET. Delete the IService1.vb and Service1.svc files that get created by default. Right click on the project and Add New Item. Select an ADO.NET Entity Data Model and call it NorthwindDataModel. This fires up a wizard, so choose 'Generate from database' and create a connection to the Northwind database on your local machine or network. It's a good idea to create a SQL user on Northwind (i.e. northwind/northwind) and use it from the get-go rather than relying on integrated security - it makes life easier when you deploy to IIS. If you choose to do this, include the sensitive data in the connection string. Put ticks in all three boxes and hit Finish.
You'll get a nice ER diagram of Northwind, which you can close. Add another new item, this time an ADO.NET Data Service, and call it NorthwindDataService. A class file opens up and needs a few modifications. First, replace the [[class name]] placeholder with NorthwindEntities. Second, add config.UseVerboseErrors = True to the InitializeService sub (it helps with debugging). Then uncomment the last two commented lines, replace the first parameter in each method call with "*", and change the second parameter from AllRead to All in both. Do a build and you should be good to go.
Let's test it out. Hit F5 and you should get IE up with a load of XML. You should recognise the tables from the Northwind database. Add /Categories after .svc and you will get just the Categories info - this is REST working in all its genius glory.
Publish to IIS next. Right click, publish. Target should be c:\inetpub\wwwroot\NorthwindService (remember that VS needs to be in Admin mode). Then open up IIS (which you installed, right, when you built your laptop) and create a new application on your default web site. Set the alias to NorthwindService and the path to where you just published to. You should now be able to browse to http://localhost/NorthwindService/NorthwindDataService.svc/. Try adding Categories after the final / in the URL and make sure you get the Categories data back.
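To give a flavour of the addressing scheme, here are a few more URLs to try in the browser. These follow the standard ADO.NET Data Services query conventions; the column and navigation property names are the usual Northwind ones, so adjust them if your copy differs:

http://localhost/NorthwindService/NorthwindDataService.svc/Categories   (all categories)
http://localhost/NorthwindService/NorthwindDataService.svc/Categories(1)   (the category with key 1)
http://localhost/NorthwindService/NorthwindDataService.svc/Categories(1)/Products   (the products in that category)
http://localhost/NorthwindService/NorthwindDataService.svc/Categories?$top=3   (just the first three)
http://localhost/NorthwindService/NorthwindDataService.svc/Categories?$filter=CategoryName eq 'Beverages'   (filtered by name)

Everything comes back as the same style of XML feed in the browser.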
How do I make jQuery talk to this service? That's dead easy but I will continue this tomorrow.
Friday, 4 June 2010
Percentage completion in a script decision tree
Came across an interesting problem today. We're working on a series of scripts for customer services people to work through different processes over the phone or face to face with the public. A useful indicator for the user is to see how far through the script they are. I've worked out how to get a nifty Javascript progress indicator working (I'll post the code and the inspiration in another entry) but the real question is how to work out the progress at each point in the tree.
The trees we're working with don't have a fixed size; there is no minimum or maximum branch length. It's perfectly possible for one branch to be four nodes long and another twelve. We've thought about using the non-decision nodes (i.e. where something gets input, like a name or bank account number, rather than the user deciding on something) but there aren't that many of those - certainly not enough to stop the indicator sitting at 0% or leaping straight to 100%.
I've had a quick crawl of the web but can't find anything. I think the right way to go might be a 'worst case' approach whereby the script looks at the total possible number of nodes left and uses this as a basis to work out a percentage. This should be easy enough to implement with an xml parser.
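As a first stab, here's a sketch of that worst-case idea in Javascript. The XML layout is an assumption (nested <node> elements, one per question), as is the stepsTaken counter and the script.xml location, so it's only meant to show the shape of the calculation.

// Longest possible chain of <node> elements below the given node,
// i.e. the worst-case number of questions still to answer from here
function longestRemainingPath(xmlNode) {
    var deepest = 0;
    for (var i = 0; i < xmlNode.childNodes.length; i++) {
        var child = xmlNode.childNodes[i];
        if (child.nodeName === "node") {
            var depth = 1 + longestRemainingPath(child);
            if (depth > deepest) {
                deepest = depth;
            }
        }
    }
    return deepest;
}

// Progress = questions answered so far versus the worst case total
// (answered so far plus the deepest branch still ahead of the user)
function percentComplete(stepsTaken, currentNode) {
    var remaining = longestRemainingPath(currentNode);
    return Math.round((stepsTaken / (stepsTaken + remaining)) * 100);
}

// Example: load the script definition with jQuery and report progress
// at the root, i.e. just after the first question has been answered
$.ajax({
    type: "GET",
    url: "script.xml",   // hypothetical location of the script tree
    dataType: "xml",
    success: function (doc) {
        alert(percentComplete(1, doc.documentElement) + "% complete");
    }
});

At a leaf node the remaining count is zero, so the indicator lands on 100%, and it can never show 0% because at least one step has always been taken by the time it's displayed.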
Tuesday, 1 June 2010
HTC Desire is iPhone killer
I was lucky enough to get a new HTC Desire on Friday. After suffering from iPhone envy for the last couple of years I can safely say my HTC kicks it all over the shop. The Android OS is totally customisable and expandable by anyone; it comes with free satnav via the Google API; and there are loads of funky apps and plenty of options when it comes to mail, SMS and the web. This is what I have wanted for the last five years and now it is here.