My talk at ITCamp 2014 was about scalability patterns for cloud applications (especially Microsoft Azure applications).
Here are the slides:
I recently gave a talk on ‘What Is Cloud Computing’ at a gathering of the local developer community. The talk covered Microsoft Azure and its features, and how they can help developers in their day-to-day activities.
Here are the slides:
And here’s the recording:
https://www.youtube.com/watch?v=W_xbmWpbG0w&index=4&list=PLnd09OEObW5rgJ-o3I14g1BAS22yl3iIJ
I recently wrote two articles in TechWiki, which were both featured in the Wiki Magazine (issue 6). You can find the articles here and here, and the magazine here.
However, due to some misconfiguration of a new WordPress plug-in I succeeded in corrupting my database completely, and my oldest backup apparently wasn’t complete. Therefore, you’ll see that my last posts have awkward dates and that some of the attached images have been lost. Sorry for that 🙁
Alex
Lab Management is a great piece of software that makes great use of virtual machines in order to create virtual labs where you, your team and your testers can test an application in a clean environment. Lab Management integrates with Team Foundation Server 2010 and thus lets you create lab environments from Visual Studio with ease.
So I started upgrading our TFS 2008 to the not-so-brand-new TFS 2010, on a completely different, new machine. Besides the hassle of upgrading the databases, preparing the user accounts, the shared folders, the services etc., I got to the point where I had everything working (except the SharePoint Services 3 integration; more on that later) and was getting ready to install the Lab Management stuff.
First things first, so I installed the Hyper-V role on my Windows Server 2008 R2 machine and afterwards System Center Virtual Machine Manager 2008 R2, because Lab Management works with SCVMM. After setting everything up and creating the SCVMM configuration to work with Hyper-V, I got to the final (and, in the end, not so final after all) point where I would configure Lab Management in the Team Foundation Server Admin Console.
So I put in the machine’s fully qualified domain name and click Test, but then suddenly a dialog box pops up requesting a user account. So I enter the user account I created for the Lab Management stuff (TFSLAB), type in the password and click Test. The credentials are fine, so I click OK. Boom! I get this error:
TF260078: Team Foundation Server could not connect to the System Center Virtual Machine Manager Server: servername. More information for administrator: You cannot contact the Virtual Machine Manager server. The credentials provided have insufficient privileges on servername.
Ensure that your account has access to the Virtual Machine Manager server on servername, and then try the operation again.
Right. Now what? I double-check the password. Password’s fine. I double-check the username. Username’s fine. Obviously this doesn’t have anything to do with the credentials. I check the Configuring Lab Management for the…
There’s a hidden feature in Windows 8.1 that for some reason (marketing?!) never really got publicized… It’s slide-to-shutdown. Basically, just like on Windows Phone, slide-to-shutdown gives you the option of shutting down your PC by sliding down the lock screen.
This ‘feature’ is nevertheless available on your Windows 8.1 PC by running slidetoshutdown.exe. You can do this directly from your Run prompt or by launching slidetoshutdown.exe from a custom app you might develop for yourself (and the rest of the world).
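If you want to trigger it from code, here’s a minimal C# sketch; it assumes the executable sits in System32, which is where Windows 8.1 puts it:

using System.Diagnostics;

class SlideToShutdownLauncher
{
    static void Main()
    {
        // Launch the built-in slide-to-shutdown UI.
        // The path assumes the default Windows 8.1 location.
        Process.Start(@"C:\Windows\System32\SlideToShutDown.exe");
    }
}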
Alex
If you’ve ever received support from the Microsoft Azure Support team on Hosted Services/Cloud Services, you might want to know that the great team that supported you (which, by the way, is deployed around the world in order to offer 24/7 support!) used a tool that allows them (and, since last year, you too) to debug slow performance and hangs, manipulate HTTP traffic, analyze network performance, transfer files, take a machine you’re debugging out of the load balancer, check the defined inputs on your service’s roles and so on.
Please be advised that this is actually an in-house tool developed by Microsoft and is arguably the handiest tool to have whenever something works unexpectedly in your cloud environment.
The link to it is here, and this blogpost explains all the features in a little more detail.
Hope you’ll find it useful at some point.
I recently worked on a seminar presentation covering Microsoft Azure functionality and I remembered that there’s a set of all the marketing symbols used around Azure, which is extremely useful for scenarios like mine. Here it is: http://www.microsoft.com/en-us/download/details.aspx?id=41937
Hopefully this helps you out.
Hi loyal readers! I’m super excited today because the Azure SQL Database team has just announced two new tiers!
Basically, they announced a Basic and a Standard tier, in addition to the Premium tier. OK, you might ask yourself why I didn’t mention the Web and Business tiers. Well, because they will be retired in 12 months :(.
The new tiers won’t just change the naming conventions. They come with some additional goodies too! First of all, there’s a 99.95% SLA (as soon as the tiers hit general availability). Second, there’s self-service restore, a feature that allows automatic restoration of your database. Based on the tier, you can get your data back from a restore point taken up to 24 hours, 7 days or 35 days earlier (guess which tier offers restoration to any point within 35 days – you’re right: Premium). Moreover, there’s a disaster recovery scenario now too: basically, you can get up to 4 readable geo-replicas created for your database. And last but especially not least is performance. If before Premium your complaints about performance were legit, starting now you’re no longer allowed to complain about SQL Database performance :). In order to express performance, the Azure team has defined the DTU, the acronym for database throughput unit. Basically, a DTU combines CPU, memory, physical reads and transaction log writes into a single unit of processing. Based on this definition, “a performance level with 5 DTUs has five times more power than a performance level with 1 DTU”. (http://msdn.microsoft.com/en-US/library/azure/dn741336.aspx)
In other words, you have the option of scaling up your database in the most transparent way: if your database no longer keeps up with the high concurrency, just scale to a system with double, triple etc. the power.
Moreover, there is ASDB, which stands for Azure SQL Database Benchmark. “ASDB measures the actual throughput of a performance level by using a mix of database operations which occur most frequently in online transaction processing (OLTP) workloads”. There’s more information on ASDB here: http://msdn.microsoft.com/en-US/library/azure/dn741327.aspx
When it comes to database performance, the most evident reference is the transaction rate. On the link provided above, there’s a table with each tier’s performance levels; let me give you a hint about what that means today: you can get up to 730 transactions per second in your database, with 800 concurrent users. Wow!
If you plan to upgrade today, make sure that your subscription has the preview feature activated. If so, you’ll have to create a new server and copy your existing Web/Business database over to it via any mechanism you want. The most comfortable one is probably exporting a .bacpac of your existing database and importing it on the new server. However, please keep in mind that this small drawback is only temporary: the team plans to offer the option of scaling from Web and Business to Basic and Standard and vice versa without moving the database to a new server (I don’t know exactly when, but most likely before the new tiers are generally available).
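If you’d rather script that export/import step than click through the portal, one way is the DacFx client library (the Microsoft.SqlServer.Dac NuGet package). Here’s a minimal sketch; the server names, credentials, database name and file path are placeholders, so replace them with your own:

using Microsoft.SqlServer.Dac; // NuGet package: Microsoft.SqlServer.DacFx

class BacpacMigration
{
    static void Main()
    {
        // Placeholder connection strings - point these at your old and new servers.
        var source = new DacServices(
            "Server=tcp:oldserver.database.windows.net;User ID=user;Password=secret;");
        var target = new DacServices(
            "Server=tcp:newserver.database.windows.net;User ID=user;Password=secret;");

        // Export the existing Web/Business database to a .bacpac file...
        source.ExportBacpac(@"C:\temp\MyDatabase.bacpac", "MyDatabase");

        // ...and import it into the new server hosting the new tiers.
        using (var package = BacPackage.Load(@"C:\temp\MyDatabase.bacpac"))
        {
            target.ImportBacpac(package, "MyDatabase");
        }
    }
}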
Pricing will, of course, always be a huge question. If so far pricing was GB-based depending on the tier you chose, database size is no longer the only unit of measurement here, since performance will matter from now on too. Therefore, it still is extremely important to optimize your queries as much as possible and only leave scaling as a last resort. However, if your pocket is wide enough, just go ahead, create P3 Premium databases and surf around your query waves.
Alex
A couple of days ago I updated my phone to the preview version of Windows Phone 8.1 and, after using it intensively, I made a list of things I love and things I hate about it. Here it goes:
Things I love about WP8.1
What I hate about Windows Phone 8.1:
I think that’s it for now. As soon as I find something cool/uncool about WP8.1, I’ll update this page.
A.
First of all, if you’re new to Application Insights, check out this link and this link too. In a word, Application Insights offers you deep insight data on your application performance and usage. And it rocks while doing that, too!
Since last week, Application Insights is no longer in preview. However, if you’re trying out Application Insights right now, you have obviously already found out that the corresponding NuGet package is still in beta (version 0.7.x.x) and, should you have been trying out Application Insights for a longer time, you’ve probably realized that there are a lot of changes in the configuration schema too.
One of the things I don’t like about the new schema is the lack of a special component ID for debugging (a component ID basically identifies a specific Application Insights entry, i.e. an application). This also means that when you debug, your debug data gets mixed with the production data, which is bad.
However, even though the guys at Application Insights (whom I’ve just met at Build 2014) have removed this one particular feature (which, I admit, didn’t work out for me as I expected), they’ve added tons of new features which are worth checking out.
Now, to the post’s subject: how do you collect usage and performance data if you have different cloud projects (for different environments, such as a staging and a production environment) but a single code base (meaning a single project containing your code)? The question might be tricky, since you’ll have to add the .config file directly inside the project that contains your code (for example, the web project), rather than inside the cloud project.
For such a scenario, here’s what I did. I created a new folder inside the project folder (e.g. ‘AppInsightsConfigs’) which contains an applicationinsights.config file for each environment. This basically gives me the option to define a config file for the staging deployment (ApplicationInsights.Debug.config) and another file for the production deployment (ApplicationInsights.Release.config). Obviously, each .config file has its own ComponentId and ComponentName settings.
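The folder layout ends up looking like this:

AppInsightsConfigs\
    ApplicationInsights.Debug.config
    ApplicationInsights.Release.config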
What I do next is to create either a pre-build or post-build command that contains this simple command line:
copy "$(ProjectDir)AppInsightsConfigs\ApplicationInsights.$(ConfigurationName).config" "$(TargetDir)ApplicationInsights.config"
This command simply copies either the .Debug.config or the .Release.config file into your output directory as ApplicationInsights.config, which works fine for me since I want the Debug version in the staging environment (especially for remote-debugging scenarios) and the Release version in the production environment (who would ship debugging symbols and lose code optimization in production?!).
One thing worth mentioning is that if you run this as a pre-build or post-build command, you will not get the right version of the .config file unless you exclude or rename the ApplicationInsights.config file that the Visual Studio Application Insights extension automatically adds (or the one you’ve added manually). Moreover, if you decide to run the command as a pre-build command, you also have the option of replacing the $(TargetDir) macro with the $(ProjectDir) macro, which will copy the desired configuration file over your original ApplicationInsights.config in the root directory, so that no exclude or rename is necessary (see the variant below). However, in this case please keep in mind that any change you make inside your ApplicationInsights.config file will be lost the moment you run a build. I also don’t recommend running the command as a post-build command with $(ProjectDir) as the destination folder, because you’d need to build your project twice for the command to take effect and you’ll almost certainly forget to do so :).
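For reference, here’s how that pre-build variant would look, with $(ProjectDir) as the destination instead of $(TargetDir):

copy "$(ProjectDir)AppInsightsConfigs\ApplicationInsights.$(ConfigurationName).config" "$(ProjectDir)ApplicationInsights.config"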
A