I’ve noticed a worrying trend of late, when looking at code written by developers who are new to C#, or have never worked with the language prior to C# 3.0. I am referring to the misuse and overuse of the var keyword.

The purpose of var, for those who don’t know, is to omit the type name when declaring a local variable in situations where the type name is unknown, unavailable or doesn’t exist at the point where the code is written. The primary case is anonymous types, whose names are generated by the compiler and are never available to the programmer at the point of coding. It is also used in LINQ, where the result type of a query cannot easily be inferred by the programmer, perhaps because it involves grouping structures, nested generic types or, indeed, anonymous types as well.
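
To make that intended use concrete, here is a minimal sketch (the sample data and variable names are my own, not from the original article):

```csharp
using System.Linq;

// var is required here: the anonymous type has no name we could write out.
var person = new { Name = "Ada", Age = 36 };

// In LINQ, the result type of this grouping query is
// IEnumerable<IGrouping<char, string>> -- tedious to spell out,
// so var is a legitimate convenience.
var names = new[] { "Ada", "Alan", "Grace" };
var groups = from name in names
             group name by name[0];
```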

There seems to be a tendency for some programmers to use var for every variable declaration. Sure, the language doesn’t stop you from doing this and, indeed, MSDN admits that this is a “syntactic convenience”… But it also warns quite strongly that:

…the use of var does have at least the potential to make your code more difficult to understand for other developers. For that reason, the C# documentation generally uses var only when it is required.
Implicitly Typed Local Variables (C# Programming Guide), MSDN

I discovered recently that the commonly-used tool ReSharper practically mandates liberal use of var. Frankly, this isn’t helping the situation. There are some developers who try to argue the stance that var somehow improves readability and broader coding practices, such as this article:

By using var, you are forcing yourself to think more about how you name methods and variables, instead of relying on the type system to improve readability, something that is more an implementation detail…
var improves readability, Hadi Hariri

I agree with the premise of the quote above, but not with the end result. On the contrary, the overuse and misuse of var can lead to some very bad habits…

Let’s look at the argument against the widespread use of var (and for its sparing, correct use):

Implicitly-typed variables lose descriptiveness

The type name provides an extra layer of description in a local variable declaration:

// let's say we have a static method called GetContacts()
// that returns System.Data.DataTable
var individuals = GetContacts(ContactTypes.Individuals);

// how is it clear to the reader that I can do this?
return individuals.Compute("MAX(Age)", String.Empty);

My variable name above is perfectly descriptive; it distinguishes this variable from any others populated using GetContacts(), and indeed from other variables of type DataTable. When I operate on the variable, I know that it’s the individual contacts that I’m referring to, and that anything I derive from them will be of that context. However, without specifying the type name in the declaration, I lose the descriptiveness it provides…

// a more descriptive declaration
DataTable individuals = GetContacts(ContactTypes.Individuals);

When I come to revisit this body of code, I’ll know not only what the variable represents conceptually, but also its representation in terms of structure and usage; something lacking from the previous example.

‘var’ encourages Hungarian Notation

If the omission of type names from variable declarations forces us to name our variables more carefully, it follows that variable names are more likely to describe not only their purpose, but also their type:

var dtIndividuals = GetContacts(ContactTypes.Individuals);

This is precisely the definition of Hungarian Notation, which is now heavily frowned upon as a practice, especially in type-safe languages like C#.

Specificity vs. Context

There’s no doubt that variable names must be specific; however, they need never be universally specific. Just as a local variable in one method doesn’t need to differentiate itself from variables in other methods, a declaration that includes one explicit type need not differentiate itself from variables of a different explicit type. Implicit typing with var destroys the layer of context that type names provide, thus forcing variable names to be specific regardless of type:

// type provides context where names could be perceived as peers
Color orange = canvas.Background;
Fruit lemon = basket.GetRandom();


// this is far less obvious
var orange = canvas.Background;
var lemon = basket.GetRandom();

// you can't blame the programmer for making this mistake

Increased reliance on IntelliSense

If the type name is now absent from the declaration, and variable names are (quite rightly) unhelpful in ascertaining their type, the programmer is forced to rely on IDE features such as IntelliSense in order to determine what the type is and what methods/properties are available.

Now, don’t get me wrong, I love IntelliSense; I think it’s one of the most productivity-enhancing features an IDE can provide. It reduces typing, almost eliminates the need to keep a language reference on-hand, cuts out many errors that come from false assumptions about semantics… the list just goes on.

Unfortunately, the ultimate caveat is that IntelliSense isn’t universally available; you can write C# code without it, and in some cases I think that programmers should! Code should be easily-maintainable and debuggable in all potential coding environments, even when IntelliSense is unavailable; and implicitly-typed variables seriously hinder this objective.

No backwards compatibility

One of the advantages of an object-oriented language like C# is the potential for code re-use. You can write a component and use it in one environment (e.g. WPF, .NET 3.5), then apply it in another (e.g. ASP.NET 2.0). When authoring such components, it’s useful to be aware of the advantage of that code working across as many versions of the language and framework as possible (without impeding functionality or adding significant extra code, of course).

The practice of using var for all local variable declarations renders that code incompatible with C# 2.0 and below. If var is restricted to its intended use (i.e. LINQ, anonymous types) then only components which utilise those language features will be affected. I’ve no doubt that a lot of perfectly-operable code is being written today that will be useless in environments where an older version of the framework/language is in use. And believe me, taking type names out of code is a hell of a lot easier than putting type names back into code.
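
For instance (a sketch of the compatibility point; the collection type is arbitrary):

```csharp
using System.Collections.Generic;

// Compiles under C# 2.0 and later:
Dictionary<string, int> ages = new Dictionary<string, int>();

// Produces identical IL, but requires a C# 3.0+ compiler:
var sameAges = new Dictionary<string, int>();
```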

Final Words

I sincerely hope that people will come away from this article with a better understanding of the purpose of the var keyword in C#, when to use it and, more importantly, when not to use it. As a community of developers, it’s important to encourage good practices and identify questionable ones; and I believe that the overuse of var is certainly one such questionable practice.


33 thoughts on “Misuse of the ‘var’ keyword in C#”

  1. -12

    Why should I write code for obsolete environments that don’t have IntelliSense?

    I don’t publish my code, so it is only going to be read on my machine, with great intellisense. Likewise, I have no intention of supporting prior versions of the framework, and it is not clear that there is a compelling reason for me to do so.

    Your quote from MSDN is inappropriate. MSDN is documentation. It IS almost always read without IntelliSense, and it is read to understand details of an API rather than the flow of an algorithm. If I were writing documentation, I would avoid var.

    Using var everywhere (as I do) makes refactoring easier and allows me to focus on the meaning of my code, which is rarely dependent on the exact type of the object involved. (Alternatively, if the semantics of a named method depend on the type, then something is wrong with the design, not the use of var.)

    • -4

      The author is clearly a non-var fanboy.

      The data type declarations are just noise unless you’re dealing with types like int, float, double, decimal etc.

  2. -5

    Whilst I agree ReSharper’s default suggestion of var even for ints, etc. is silly (I have no problem with people electing to code that way, but it’s a silly recommendation imo) – it has a nice in-between setting to recommend var only when the type appears on the right-hand side.

    I don’t think any real criticism can be made of using var for those instances (which are most variable declarations I find). What’s the advantage in typing the same type on both sides of the assignment operator for casts/new operators?

    • +7

      I agree that the presence of a cast or new operator mitigates the readability problem, but it still doesn’t change the fact that the intended purpose of var was to provide a way to declare a variable of a compiler-generated type whose name would not be available at the point of coding. If you know the type name, why omit it? The var keyword was never meant to substantially change syntax or coding practices in C#, and yet many programmers see it as a reason to.

  3. +1

    Hear, hear!

    My two cents: I hate var. It shows the utter laziness and lack of intelligence of a developer. If you know what type it is, declare it. If you can’t figure out what type it is, then you probably shouldn’t be writing code.

    Using var makes code less readable. I look at a declaration to see its type and to guarantee its type.

    Let’s look at this example:

    var i = 0;

    That declaration says nothing about the type and what I can expect from it. If you don’t know what you are doing and are using large counts, you will overflow it. I hope all of you that use var do overflow it and it causes a nasty bug for you. You deserve it. If you are lazy and want to shoot yourself in the foot, then go ahead.

    • -1

      Sorry, but there are times when the type name itself is very long. Writing the full type name every time just creates an unreadable mess. It is not laziness if you aim to create readable code!

      Using var makes, imo, perfect sense when you don’t care about the type.

  4. +4

    I whole-heartedly agree. var is poison. It was provided solely to allow the use of anonymous types, or types which are difficult to work out (pretty much only in LINQ). I should be able to look at a code snippet and have a good idea what it does, without knowing too much context. If a code snippet is littered with var, I need to refer to several other files to work out what it does. Yes I could use intellisense, but what if I’m looking at code online, on a source server, in an open source project, on someone’s blog, etc? Using var is lazy. Lazy programmers are bad programmers. Thus, using var means you are a bad programmer. ;)

  5. -3

    Until now, I have only been using var for LINQ, etc. However, I recently saw a demo from Anders Hejlsberg, the lead architect of C#, where he was using it left and right. So, I’m switching over to var.

  6. +6

    Just because a popular name in programming uses it means it’s the best practice? That’s absurd. var is an overused atrocity that has led to too many bugs and pointless arguments (like this one). C# was boasted as a type-safe language and by using var outside its intended functionality is pretty much breaking that safety.

    When my house was built, the builders buried the waste in my front, sides, and back yard. When I talked to others about this, I found out that this is common practice. What does this mean? This means that the workers were lazy, got overpaid for their work, left a mess for someone else to clean up and reflects their professionalism. I’ll never use that builder or his workers and will tell people I will never recommend them.

    That’s how us devs are viewed at times; lazy, incompetent and overpaid. The competition is enormous out there and I wouldn’t want to be weeded out because I made someone’s life harder or viewed as lazy.

  7. +2

    For the most part I cannot fault anyone for not using var. I also don’t agree that it should be used whenever possible. However, I think there is a case to be made for using it to shorten declarations. For example:

    var foo = new Dictionary<string,SomeBigHonkingNameForAGenericClass>();
    is much more readable than having the variable name in the middle.

    My rule of thumb is does it increase readability?

    Does it make it easier or more difficult for the code reviewers who will be reading my code without the aid of IntelliSense?


    var contact = contactDatabase.LookupByName(name);

    would be OK and would not be any less readable than

    Contact contact = contactDatabase.LookupByName(name);

    In both cases I know I am getting a Contact object back. Neither gives more or less information about Contact.


    var data = database.GetById(352);

    would be very poor.

    • -1

      Exactly my point.

      Using var without thinking about it is bad. When you use it to increase readability, however, it can be a great thing.

  8. 0

    @gordon: The best programmers are lazy programmers. They avoid repetition at all costs, and come up with re-usable, composable solutions so that they don’t have to do the same work twice.

    • 0

      Yes! Avoiding repetition (what ‘some’ call lazy) is the mother of invention. Lazy programmers go out of their way (ironically) to not have to repeat and instead invent. I’d rather invent than be a typist, we aren’t writing essays, there is no word count to hit.

  9. 0

    I consider this issue to separate good programmers (and thinkers) from bad. var should be used widely because it’s a form of *decoupling* and DRY … the type of a thing should be established at its definition, with as few other mentions as possible. As for Hungarian, you just don’t understand the problem with it, which again is about *coupling*. There is nothing wrong with the original “apps” Hungarian, only with the later botched “systems” Hungarian that encoded specific datatypes like dw rather than conceptual/functional types. In your example, both

    var dtIndividuals = GetContacts(ContactTypes.Individuals);

    and

    DataTable individuals = GetContacts(ContactTypes.Individuals);

    encode the capabilities of “individuals”; the Hungarian version is no worse for that than the explicit type declaration.

    • +2

      But what is the point of that if we are dealing with a type-safe language/environment? I can see that being true in languages like Python, but C#? Types matter in C#. It’s fine if you want to question the whole concept of type safety, but then you are arguing languages, not C# coding styles.

  10. +3

    “Let’s look at this example: var i = 0; That declaration says nothing about the type and what I can expect from it.”

    That’s because the variable doesn’t have an informative name — that’s bad programming, but is irrelevant to var.

    “I hope all of you that use var do overflow it and it causes a nasty bug for you. You deserve it.”

    That is a vile attitude and no one who utters such a thing has any credibility.

    “It was provided solely to allow the use of anonymous types”

    This simply isn’t true. While anonymous types necessitated var (and auto in C++11), type inference is a modern concept promoted by languages like Haskell and Scala, and this had a significant influence on introducing it into C# and C++.

    “Using var is lazy. Lazy programmers are bad programmers.”

    Using var is concise, and consistent with DRY. By this ridiculous argument (no less invalid by adding a smiley), only programmers who code in binary and spend weeks squeezing out every last bit are good programmers. Of course this is wrong — programmers who don’t understand why it’s good to use var (in many many cases) are bad programmers.

    “Just because a popular name in programming uses it means it’s the best practice?”

    Nothing was said about “popular”, which indeed would be irrelevant. But being “the lead architect of C#” is very relevant to best practice of that language.

    “C# was boasted as a type-safe language and by using var outside its intended functionality is pretty much breaking that safety.”

    var is completely typesafe. This statement shows a failure to understand the concept.

  11. +2

    As I’ve read in numerous places now, the original purpose of the “var” keyword was not to reduce character count, but to deal with anonymous types and later LINQ queries.

    What “var” is being used for today instead seems to be to reduce statements of this nature:

    {VariableType} {Variable} = new {VariableType}();

    To this:
    var {Variable} = new {VariableType}();

    That’s all well and good if someone wants to code that way, because C# is still a strongly typed language and you could still, given enough time or the right tools, figure out what type the variable is. But the problem is, this is not the purpose of var. A better feature (and one that I think should be added to alleviate this coding-standards-war) would be:

    {VariableType} {Variable} = new();

    Any time you use a variable, you need to know what it is, so I don’t see the value in trying to obfuscate the type. I do, however, see the value in making simpler statements than the following:

    Dictionary<List<int>, List<string>> myComplicatedVariable = new Dictionary<List<int>, List<string>>();

    But at some point, you have to define the type and it might as well be obvious:

    Dictionary<List<int>, List<string>> myComplicatedVariable = new();

    •

      That’s an interesting syntax you’ve proposed. Unfortunately, it breaks down when polymorphism comes into the equation, e.g:

      BaseType obj;
      if (someCondition)
          obj = new DerivedType1();
      else if (someOtherCondition)
          obj = new DerivedType2();
      else
          obj = new DerivedType3();

      I’m not sure how your syntax would apply (if at all) in a situation like the one above.

      •

        Isn’t that an irrelevant point, considering that you can’t use var in that case either? So when the left and right hand sides of an assignment are different, you always have to state the type explicitly at both ends.

  12. -2

    What exactly is the problem with relying on IntelliSense? Following that argument, you’re saying that programmers should be memorizing (or wasting time digging through) every single API they use instead of having it all at their fingertips as they type. While it’s noble to know a library well enough to leverage it without IntelliSense, why not reduce the mental burden so we can focus on other tasks? The same thing goes for var. IntelliSense is one of those advances in programming that can and should be taken completely for granted; good IDEs have it, and there’s no reason they shouldn’t.

    Taking it a step further, perhaps we should edit blocks on disk by hand with a magnet, rather than writing assembly? Or push and pop registers instead of writing C? Roll our own web servers instead of using IIS or Apache? No. Unless that’s the goal of your project or your language is structured such that these concepts are necessary, there’s no reason to worry about the lower layers in most cases.

    Var is just another small evolution, another abstraction we can use to free ourselves from writing mundane boilerplate code while we focus on solving real problems instead of struggling with the environment. You shouldn’t worry about what’s going on under the covers unless you have a good reason to dig into it. Those that abhor syntactic sugar should consider whether there is really a drawback, or if it’s just that they want to keep membership in the “elite” club of programmers who “learned the hard way” by being forced by compilers to specify their types.

  13. -1

    “To this:
    var {Variable} = new {VariableType}();

    That’s all well and good if someone wants to code that way, because C# is still a strongly typed language and you could still, given enough time or the right tools, figure out what type the variable is. ”

    Huh? Just look to the right. No time or “right tools” required.

    • +2

      This is a pretty specific case. It happens fairly frequently, but what about:

      var {Variable} = _GetVariable();

      Then you are left with having to jump to the _GetVariable() method declaration or using the IDE pop-ups to determine what the return type is.

      The argument for using var is that we shouldn’t care about types. We should use descriptive variable names that describe a concept, and rely on Intellisense to figure out what we can do with it. (Or if you are just reading code, to simply read the methods that are being applied to the variable.)

      That’s a nice theory or philosophy, but I don’t see it being practical in most situations I face. I often find myself having to figure out the implementation of classes, and switching to the use of other classes if the implementation is wrong, inefficient, or the programmer that created it was making other assumptions about its use. (Yes, I know I would save typing when switching classes, since I don’t have to rewrite “var”.)

      I find that knowing the type is very helpful. It allows me to use the variable/class much more efficiently than if I just “try it and see if it works”.

  14. +3

    Brad, you are correct. This is a worrying trend. It seems to be due to laziness or an unwillingness to type out the type name. Thanks for summing things up here.

  15. 0

    I might have agreed with you back when I was a hardcore Java programmer, but I’ve spent the past 1.5 years coding Python, and coming back to C#’s world of nonstop explicit typing feels so unbelievably tedious and cumbersome, especially when generics come into play. Believe it or not, it is possible to write very good programs without type annotations littered everywhere. A well written Python program may have no type annotations, yet be surprisingly clear in its type expectations. To some extent, type annotation can be a crutch that allows people to get away with poor documentation and poorly organized code. “var” softens the blow of explicit typing a good deal by giving the programmer the best of both worlds: let your function definitions do your type declarations for you, and leave them off of your variables.

  16. -3

    I do use “var” a lot in my C# applications and they work just fine, and I do not think I am creating a problem for subsequent developers. One has to bear in mind that checking the type of a referenced procedure/function etc. is as easy as hovering over it with the mouse to reveal its return type. My preference is to use var wherever I can get away with it.

  17. +2

    Thanks for saying it or me Brad. While I can see some advantages to inferred types, I see them so frequently misused that I’d prefer to eschew them altogether rather than encourage others on my projects to overuse them.

  18. +3

    What I really love about the whole var-or-not-to-var war is when programmers obviously misstep in their arguments…

    Like this very common one:

    var orders = GetOrders();
    foreach(var order in orders) {

    I don’t care if Customer.Orders is IEnumerable, ObservableCollection or BindingList – all I want is to keep that list in memory to iterate over it….

    Obviously the programmer here does actually care; he cares for a specific capability of the object, a capability which the interface IEnumerable provides… He also cares that it’s an IEnumerable of types that ProcessOrder can handle; he might not care for the specifics.

    By using var I actually lose that statement from him, where if he had written the code as:

    IEnumerable orders = GetOrders();
    foreach(var order in orders) {

    it is explicit, and I know when I read the code that he didn’t care for anything more specific. So now I can’t go and give him a Dictionary of orders as that may break his code. And before you say “but it’s type safe, that would cause a compile time error”:

    Not if there was another method that would match:

    public void ProcessOrders()
    {
        var orders = GetOrders();
        foreach (var order in orders)
            Process(order);
    }

    private void Process(IOrder order) { /* ... */ }

    private void Process(object unprocessable)
    {
        Console.WriteLine("Can't process object of type: " + unprocessable.GetType());
    }

    That is sort of a silly example as it stands, but you can’t rule out the possibility of hitting into the scenario it outlines… And by returning something which lives up to what “ProcessOrders” asks, you might just have caused a side-effect way down the system.

    Obviously there should be tests, but there shouldn’t be tests for things the compiler could have caught if we had actually told it what we wanted, rather than asking it to just figure it out on its own…

    If you come from a dynamic language and use that as an argument, you shouldn’t be using var; you should be using dynamic, which is a keyword I both love and hate… I love it when I can use it; I hate it when you hand one over to me… I have done my fair share of programming in dynamic languages as well, and I also use the dynamic capabilities of C# a lot, and I am loving all of it… But var has nothing to do with that…

  19. +5

    I totally agree with the article. Some comments which may be worth integrating, from a reviewer’s perspective…
    - Code is programmed once and read 15 times. Reading matters more than typing speed. Any second lost in reading cannot be restored by the second saved typing three to five characters and then hitting Tab for IntelliSense autocomplete of a type (that is the same argument as the MSDN documentation).
    - IntelliSense is not available everywhere, even in VS 2013. If, as a reviewer, you review a change in source code control, you very often use a diffing tool.
    - Western societies read from left to right; having the type at the beginning makes things easier.
    - I also think that “var” can be easier to read, but only if the rest of all good practices (like naming) is maintained. Unfortunately, that rarely happens (especially with programmers of limited experience). And guaranteed never when var is always used.

    From a theory perspective:
    - I do not believe it was the intent of the C# design to make var general-purpose; otherwise it would have been in the first release. It exists for anonymous types, and that is why it is in the language.

    A final word: I believe that this demon cannot be put back into Pandora’s box (Anders Hejlsberg’s fault). New programmers will never learn it differently, developers with a different background (e.g. Python) will continue with their programming style, and convinced var seniors block/ignore any coding styleguide stating otherwise. I saw this social behavior in open-source and industrial environments. I just have one request for all var advocates: if you are a programmer of limited experience, use types. If you understand (and have experienced) the impact of your pro-var decision on the reviewer and peer programmer, and accept it, then use var.

    (Limited experience: that limit can be pretty high. I have been programming for 15 years and still learn every day.)

  20. +1

    Am I missing something here? I don’t see the relevance of this topic. var itself is neither bad nor good; it is just something that you need to use in some cases and can use in others. var does not break the concept of type safety.
    var can increase the readability of the code.

    Why should we avoid var in order not to break compatibility with pre-C# 3 compilers? There is already a very good chance that my code won’t compile with a compiler that does not support C# 3, so why should I stay away from something as simple as var?

    “var does not show the underlying type.” True if you program with Notepad. Is anyone you know doing this?

    In my eyes this is just a pathetic discussion. Are we going to discuss the usage of #region or // vs. /**/ next?

  21. +3

    I agree with this article. I’ve been seeing a lot of developers overusing the var keyword, and its intended use was explained very clearly here. The reasons I have heard for using var are: write less code in declarations, decouple code from concrete types, the DRY (Don’t Repeat Yourself) principle, it encourages better variable naming, and some well-known architects use it a lot in demos. Except for the last, all of these reasons have some validity. The last reason is not valid because these were demos; architects rarely write production code, and being an architect doesn’t mean they know the best-practice principles of software engineering.

    All the other reasons are positives, but overusing the keyword causes more negatives. Overusing var prevents developers from easily reading the code without IntelliSense. The developers would need to jump from one code section to the next to figure out the type. This is the same symptom caused by the yo-yo anti-pattern. In addition, sometimes developers don’t have IntelliSense while reading the code (code on the internet, code in a repository, comparing changes in a file).

    Using var to decouple the code from concrete class references is not a good enough reason. There are better, more effective ways of doing that: interface development, and most of the creational patterns. Now that I’m thinking about it, if a variable declared with var is assigned to a class that implements more than one interface, then it would cause a lot of readability issues and a design issue. The example below shows how var can be very confusing when a class implements many interfaces.
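
    A hypothetical sketch of the kind of confusion meant here (the types are invented for illustration):

```csharp
// Suppose CacheStore implements both IDisposable and
// IDictionary<string, byte[]>. With var, the declaration says
// nothing about which contract the code depends on:
var store = new CacheStore();

// An explicit interface type makes the intended contract visible:
IDictionary<string, byte[]> cache = new CacheStore();
```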

  22. +2

    Like many developers, I initially overused the shiny new toy called “var.” Then one day I was writing some SharePoint code in a WinForms program, and while tracking down a weird bug I discovered that the compiler had decided my var referred to a WinForms control rather than a class in the SharePoint object model! It compiled and ran without complaint, but it gave wrong results. I immediately swore off the use of var unless I have no choice, such as with anonymous types and LINQ queries. Misusing var can introduce bugs and make code less understandable. Writing the full name of a type avoids the bugs and is valuable additional documentation of what the code does. Saving a bit of typing is the only benefit of using var, and is not justified, IMO.

