Own implementation of Lazy<T> object











The problem with the original Lazy<T> in C# is that you have to put the initialization in the constructor if you want to refer to this.



For me that is 95% of the cases, which means I have to put half of the logic concerning the lazy property in the constructor and end up with more boilerplate code.



See this question: https://stackoverflow.com/questions/53271662/concise-way-to-writy-lazy-loaded-properties-in-c-sharp/53316998#53316998



Now I made a new version where I move the initialization to the 'Get' moment. This saves boilerplate code and lets me group everything about this property together.



But:




  • is the current implementation 'safe'?

  • are there any important performance considerations vs the original one?

  • is there anything else I'm missing that I should be concerned about in a high traffic application?


Class:



public class MyLazy
{
    private object _Value;
    private bool _Loaded;
    private object _Lock = new object();

    public T Get<T>(Func<T> create)
    {
        if ( _Loaded )
            return (T)_Value;

        lock (_Lock)
        {
            if ( _Loaded ) // double checked lock
                return (T)_Value;

            _Value = create();
            _Loaded = true;
        }

        return (T)_Value;
    }

    public void Invalidate()
    {
        lock ( _Lock )
            _Loaded = false;
    }
}


Use:



MyLazy _SubEvents = new MyLazy();
public SubEvents SubEvents => _SubEvents.Get(() =>
{
    // [....]
    return x;
});


Or like:



MyLazy _SubEvents = new MyLazy();
public SubEvents SubEvents => _SubEvents.Get(LoadSubEvents);

public SubEvents LoadSubEvents()
{
    // [....]
    return x;
}









c# generics lazy






edited 12 hours ago by Heslacher
asked yesterday by Dirk Boer


  • @AdrianoRepetti this would make a great answer ;-) – t3chb0t, yesterday

  • @AdrianoRepetti, I actually agree, so I provided the second example (how I usually use it). The problem is that I do need to access instance variables 95% of the time, and I really dislike putting that in the constructor: if you have 5+ properties like that, you're moving a lot of logic into your constructor, and you end up jumping around in your code because the rest of the boilerplate needs to live outside the constructor. – Dirk Boer, yesterday

  • Mod Note: Please do not use comments to lead extended discussions about a question and about how to write correct threadsafe code. Comments have been purged. You're all very welcome to continue the discussion in Code Review Chat if you want :) – Vogel612, 12 hours ago


















8 Answers

















up vote 30 down vote, accepted











is the current implementation 'safe'?




Absolutely not. The fact that you had to ask this question indicates that you do not understand enough about threading to build your own mechanisms like this. You need to have a deep and thorough understanding of the memory model to build these mechanisms. That is why you should always rely on the mechanisms provided for you in the framework, which were written by experts.



Why is it unsafe? Consider the following scenario. We have two threads, A and B. _Value is null and _Loaded is false.




  • We're on thread A.

  • The memory location of _Value is loaded into the processor cache for the CPU that thread A is affinitized to. It is null.

  • We switch to thread B.

  • Thread B reads _Loaded as false, takes the lock, checks _Loaded again, calls create, assigns _Value and _Loaded and leaves the lock.

  • We switch back to thread A.


  • _Loaded is now true, so thread A returns _Value from the processor cache, which is null.


Thread A is not required to invalidate the cache because thread A never takes a lock!



Now, I made an argument here from processor caches. This is the wrong argument to make in general. Rather, what you must do when trying to build a new threading mechanism like this is to not reason about any specific processor architecture, but rather to reason about the abstract memory model of the C# language. C# permits reads and writes to move forwards and backwards in time in multithreaded programs. Any time travel that is not explicitly forbidden by the C# specification must be considered to be possible. Your task is to then write code that is correct for any possible combination of movements of reads and writes in time regardless of whether they are really possible on a specific processor or not.



Note that in particular the C# specification does not require that all threads observe a consistent set of write and read re-orderings. It is perfectly legal and possible for two threads to disagree on how a read was re-ordered with respect to a write.
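To make that abstract point concrete, here is a small illustrative sketch (my own example, not part of the answer) of the classic "store buffering" outcome: with ordinary non-volatile fields, the C# memory model permits both threads to read 0, a result that no simple interleaving of the four statements as written could produce.

using System;
using System.Threading.Tasks;

public static class StoreBufferingDemo
{
    private static int x, y, r1, r2;

    public static void Main()
    {
        var t1 = Task.Run(() => { x = 1; r1 = y; });
        var t2 = Task.Run(() => { y = 1; r2 = x; });
        Task.WaitAll(t1, t2);

        // r1 == 0 && r2 == 0 is a permitted outcome: each thread's read may be
        // reordered before its own write (and real hardware store buffers can
        // produce exactly this). A single run may well not show it, of course.
        Console.WriteLine($"r1={r1}, r2={r2}");
    }
}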



If writing correct programs in a world where all reads and writes can be moved around in time sounds hard, that's because it is. I am not competent to do this work, and I do not attempt to. I leave it to experts.




are there any important performance considerations vs the original one?




Only you can answer that question. Answer performance questions by gathering real-world empirical data.



However, I can say a few things about this problem in general.



The first is: double-checked locking is intended to avoid the cost of the lock. Let's examine the assumptions underlying that intention. The assumption is that the cost of taking the lock is too high on the uncontended path. Is that assumption warranted? What is the cost of taking an uncontended lock? Did you measure it? Did you compare it against the cost of the lock-avoiding check? (Since the lock-avoiding check code is wrong, testing it for performance is not actually meaningful since we can always write faster code if we don't care about correctness, but still, we need to know whether this intervention is an improvement.) And most importantly, is the cost of taking an uncontended lock relevant to the consumer of this code? Because they are the stakeholder whose opinions are relevant; what do they say about the cost of an uncontended lock?
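If you do want that empirical data, a measurement harness is not hard to set up. The sketch below is one possible way to do it (an assumption on my part, using the BenchmarkDotNet package, which the answer does not mention): it compares an uncontended lock against the lock-avoiding read on a single thread, and says nothing about contended behaviour or about your real workload.

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class LockOverheadBenchmarks
{
    private readonly object _lock = new object();
    private object _value = "already created";
    private bool _loaded = true;

    [Benchmark(Baseline = true)]
    public object AlwaysLock()
    {
        // Take the (uncontended) lock on every read.
        lock (_lock)
        {
            return _value;
        }
    }

    [Benchmark]
    public object LockAvoidingRead()
    {
        // The fast path of the double-checked pattern (correctness aside).
        if (_loaded)
            return _value;
        lock (_lock)
        {
            return _value;
        }
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<LockOverheadBenchmarks>();
}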



Let's suppose that the cost of an uncontended lock is relevant. Then surely the cost of a contended lock is enormously relevant. You've built a mechanism that potentially contends a lot of threads! What are the alternatives that you considered here? For example, you could avoid the lock altogether by deciding that it is OK for the create function to be called on multiple threads -- perhaps we know that it is cheap and idempotent. Those threads can then race to their heart's content to initialize the field, and we can use an interlocked exchange to ensure that we get a consistent value. That avoids the cost of the lock altogether, but it creates a different kind of cost, and puts a requirement on the caller to pass an idempotent creator.
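Here is a minimal sketch of the lock-free alternative described above (my interpretation, not code from the answer): let every thread that races call create, and use an interlocked compare-and-swap so that exactly one result is ever published. This is only acceptable when create is cheap, idempotent and safe to run more than once, and it only works as written for reference types.

using System;
using System.Threading;

public sealed class RacyLazy<T> where T : class
{
    private T _value;

    public T Get(Func<T> create)
    {
        // If another thread has already published a value, use it.
        T existing = Volatile.Read(ref _value);
        if (existing != null)
            return existing;

        // Otherwise build our own candidate; several threads may do this at once.
        T candidate = create();

        // Publish the candidate only if the field is still null; the first
        // thread to get here wins and everyone else adopts the winner's value.
        existing = Interlocked.CompareExchange(ref _value, candidate, null);
        return existing ?? candidate;
    }
}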



Let's consider other aspects of your solution with respect to performance. You allocate the lock object regardless of whether you ever take the lock, and you keep it forever. What's the burden on the garbage collector? What is the impact on collection pressure? These things are all deeply relevant to performance. Again, remember, the assumption here is that we are so worried about the couple of nanoseconds it takes to enter and leave an uncontended lock that we're willing to write a double checked lock. If those nanoseconds are relevant then surely the milliseconds it takes to do an extra collection are incredibly relevant!




is there anything else I'm missing that I should be concerned about in a high traffic application?




I don't know how to answer that question.

























  • mhmm... this sounds sane but I'm still not sure how to apply this knowledge in a real-life scenario or simply to this question - maybe you could add some practical advice? Currently it's more like a rant on the OP for not knowing what they're doing ;-( I don't find this answer very helpful... sorry. It's not the same level as your other posts or blogs. – t3chb0t, yesterday

  • @t3chb0t: As for practical advice, I gave the best practical advice I know. Do not attempt to roll your own low-level mechanisms; use mechanisms that were built by experts. – Eric Lippert, yesterday

  • @CaTs: That said, I note that you are responding to tone rather than to the factual content. That Dirk and I both lack the knowledge and ability to do this work correctly is a fact. That the best course of action is to use tools correctly that were built by people who are competent is also a fact. If stating those facts emphatically makes people uncomfortable, well, that's a price I'm willing to pay in order to clearly and unequivocally state that rolling your own threading mechanisms is incredibly dangerous and easy to get wrong. – Eric Lippert, 16 hours ago

  • The purpose of StackOverflow is to create searchable artifacts - a library of questions and answers that may be useful to future programmers, and hopefully also useful to the original asker. If the CodeReview stack has the same goals, then this is an excellent answer. Considering that this is Eric Lippert (who served on the C# design team), adding a code snippet or other suggestion would have sent the wrong message to future programmers: "If I include this snippet / follow this pattern, then my homebrew thread management solution will be safe and work well - Eric Lippert said so." – benj2240, 8 hours ago

  • @benj2240: Thanks for pointing that out. Indeed, I very specifically did not answer the question "how do I fix the correctness problem?" because I am not competent to fix it. I mean, I could guess, and those would be some pretty informed guesses, but you want to reason about threading from a position of certainty, not hope! (And of course I did not answer the performance question because that's not an answerable question; performance is a process of identifying and meeting the needs of stakeholders, and we don't know who those are.) – Eric Lippert, 8 hours ago


















up vote 13 down vote














is there anything else I'm missing that I should be concerned about in a high traffic application?




Yes: unlike Lazy<T>.Value, your stored value isn't generic anymore but an object, and if Func<T> returns a value type then a lot of un/boxing will take place. This might hurt performance.



I think a LazyFactory.GetOrCreate<T>(...) would do a better job.
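For illustration, here is a minimal sketch of the class-level-generic shape being hinted at (the LazyFactory.GetOrCreate name is t3chb0t's suggestion; this sketch simply moves the type parameter onto the class so the stored value is a T and value types are never boxed). It deliberately always takes the lock rather than double-checking, to sidestep the correctness problems discussed in the accepted answer.

using System;

public class MyLazy<T>
{
    private T _value;
    private bool _loaded;
    private readonly object _lock = new object();

    public T Get(Func<T> create)
    {
        // Always lock: a bit slower than a (correct) lock-avoiding read,
        // but free of the memory-model pitfalls of the original version.
        lock (_lock)
        {
            if (!_loaded)
            {
                _value = create();
                _loaded = true;
            }
            return _value;
        }
    }

    public void Invalidate()
    {
        lock (_lock)
        {
            _loaded = false;
            _value = default(T);
        }
    }
}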



























  • Hi @t3chb0t, good point & thanks for your feedback. I could solve this with making the class itself actually (optionally) generic. – Dirk Boer, yesterday

  • About LazyFactory.GetOrCreate<T>: wouldn't you still need to put your core logic into the constructor? – Dirk Boer, yesterday

  • @DirkBoer why is the generic parameter in the Get method rather than the type? It requires the class's consumers to behave well rather than allowing the compiler to enforce good behavior. Consider: string s = _SubEvents.Get(() => "s"); int fortyTwo = _SubEvents.Get(() => 42); – phoog, yesterday

  • @phoog It was directed at t3chb0t, but to your point about reducing verbosity, just changing the class declaration to MyLazy<T> and changing private object _Value; to private T _Value; wouldn't increase the verbosity, I don't think. – jpmc26, yesterday

  • @phoog If the class name is a major source of boilerplate, the OP surely has a severe problem with their naming techniques. Maybe they should consider namespaces or the aliasing form of using. =) I believe the question is referring to passing in the initialization function, which might be long and complex for legitimate reasons. The example usages of this class do also suggest the OP is passing in long lambdas. – jpmc26, yesterday




















up vote 7 down vote













I like the idea, but you should carefully explain how this works in comments.



Try this:



  MyLazy myLazy = new MyLazy();

int value1 = myLazy.Get(() => 42);
Console.WriteLine(value1);

int value2 = myLazy.Get(() => 65);
Console.WriteLine(value2);


It correctly prints out:



42
42


But even though we know the answer to everything is 42, it isn't that intuitive. The problem is obviously that you have to - or can - provide a creator function per call to Get<T>(Func<T> creator), that it is arbitrary, and that only the first one actually has any effect.

























  • The weird thing about this implementation is that the following compiles but fails with a runtime exception: MyLazy myLazy = new MyLazy(); int value1 = myLazy.Get(() => 42); Console.WriteLine(value1); string value2 = myLazy.Get(() => "sixty-five"); Console.WriteLine(value2); – phoog, yesterday

  • @phoog: That is probably because Get(() => "sixty-five") gets resolved to Get<string> which then tries to do return (string)(object)42. – firda, yesterday

  • @firda yes, that is precisely why. My point with the example is to draw attention to the aspect of the design that is essentially an abuse of the generics system. Calling the property after the value has been initialized requires creating a delegate object that serves no purpose other than identifying the target type for the cast. The point of generics is so these checks can be done at compile time (which should save some nanoseconds at runtime), and that's not happening here. – phoog, yesterday




















up vote 7 down vote














Meaning that I have to put half of my logic concerning the lazy property in the constructor, and having more boilerplate code.




This is a little speculative, but I think you have an XY problem. You're trying to reduce boilerplate, but there are probably better ways to do that than what you've suggested.



If I understand correctly, your problem is that your classes look something like this:



public class MyClass
{
    private Lazy<string> _MyStringValue;
    // ...

    public MyClass()
    {
        this._MyStringValue = new Lazy<string>(() => {
            var builder = new StringBuilder();
            builder.Append("a");
            // 50 more lines of expensive construction
            return builder.ToString();
        });

        // 100 more lines constructing OTHER lazy stuff
    }
}


Gloss over the details of building up the value; it's just an example. The important point is that you have all this logic here deep in your constructor.



I think there are two things you can do to alleviate this problem:





  1. Parameterize



    Why put all this logic in the constructor? You're losing a lot of reusability by doing that anyway. So make these things parameters and construct them elsewhere:



    public class MyClass
    {
        private Lazy<string> _MyStringValue;
        // ...

        public MyClass(Lazy<string> myStringValue)
        {
            this._MyStringValue = myStringValue;
        }
    }



  2. You can embed this construction logic in a method, and then pass the method to the Lazy constructor:



    class MyStringValueMaker
    {
        // Could be an instance method if that's more appropriate.
        // This is just for example
        public static string MakeValue()
        {
            var builder = new StringBuilder();
            builder.Append("a");
            // 50 more lines of expensive construction
            return builder.ToString();
        }
    }


    And then elsewhere:



    var myClass = new MyClass(new Lazy<string>(MyStringValueMaker.MakeValue));



Now suddenly everything is much better organized, more reusable, and simpler to understand.



If that's not what your class originally looked like, well, then I think you'd be better off posting a new question asking for a review on the original class to get ideas about how to improve.




































    up vote 5 down vote













    Why not wrap a Lazy<T> and lazily create the Lazy<T> itself inside your Get?



    public class MyLazy {
        private object lazy;
        private object _Lock = new object();

        public T Get<T>(Func<T> factory) {
            if (lazy == null) {
                lock (_Lock) {
                    if (lazy == null) {
                        lazy = new Lazy<T>(factory);
                    }
                }
            }
            return ((Lazy<T>)lazy).Value;
        }
    }


    This takes advantage of existing features that have been tried and tested, instead of trying to roll your own.

























    • I heard you like Lazy so I put some Lazy in your Lazy... ;) – Pieter Witvoet, yesterday

    • If you took the ConcurrentDictionary then you wouldn't need the lock - it has the very convenient GetOrAdd method ;-] – t3chb0t, yesterday

    • @t3chb0t good point – Nkosi, yesterday

    • A static cache without an eviction policy is called a memory leak. – Johnbot, yesterday

    • @Johnbot true. I'll revert that suggestion out for the time being. – Nkosi, yesterday


















    up vote 5 down vote














    is there anything else I'm missing that I should be concerned about in a high traffic application?




    By passing the delegate to the Get method, you instantiate a new delegate object every time the property is read; with System.Lazy<T> the delegate instance is created only once, when the Lazy<T> is constructed.
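    A minimal sketch of how that per-read allocation can be avoided with the MyLazy class from the question (SubEventsHost is a hypothetical containing class; SubEvents and LoadSubEvents are the names used in the question): cache the Func<T> in a field so the delegate is created exactly once.

    public class SubEventsHost
    {
        private readonly MyLazy _subEvents = new MyLazy();
        private readonly Func<SubEvents> _loadSubEvents;   // one delegate for the lifetime of the object

        public SubEventsHost()
        {
            _loadSubEvents = LoadSubEvents;                // allocated once, here
        }

        // No new delegate is allocated on each read of the property.
        public SubEvents SubEvents => _subEvents.Get(_loadSubEvents);

        private SubEvents LoadSubEvents()
        {
            // [....] the expensive construction from the question
            return new SubEvents();
        }
    }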


































      up vote 2 down vote













      With the feedback that (my brain) could understand, I've come to this for now.




      • (I think) I literally copied the locking structure of the 'Lazy, thread-safe Singleton' pattern

      • Added the volatile keyword for the _Loaded check

      • Moved the generic parameter to the class type, adding a bit more boilerplate code in exchange for more type safety and no boxing

      • Added a warning to remind myself there might be issues


      As for the advice "leave it to the smarter people": that's something I can't work with. I like to learn, I like other people to learn, and I prefer a society where people are motivated to fail (against a calculated cost) in order to learn for themselves.

      I appreciate that everyone has a different opinion about that, and that's okay.

      I'm still not 100% sure whether this solves at least the thread-safety problems of the first version, because the conversation went a bit off-topic IMO. If anyone knowledgeable can comment on that, I would appreciate it. For the rest, I'm going to try to use this code, see what it does in production, and see whether it causes (practical) problems for my caching of properties.



      /// <summary>
      /// Warning: might not be as performant (and safe?) as the Lazy<T>, see:
      /// https://codereview.stackexchange.com/questions/207708/own-implementation-of-lazyt-object
      /// </summary>
      public class MyLazy<T>
      {
          private T _Value;
          private volatile bool _Loaded;
          private object _Lock = new object();

          public T Get(Func<T> create)
          {
              if ( !_Loaded )
              {
                  lock (_Lock)
                  {
                      if ( !_Loaded ) // double checked lock
                      {
                          _Value = create();
                          _Loaded = true;
                      }
                  }
              }

              return _Value;
          }

          public void Invalidate()
          {
              lock ( _Lock )
                  _Loaded = false;
          }
      }
























      • As for the advice "Leave it to the smarter people". That's something I can't work with. I fully support this point! If it's so easy to do it the wrong way then perhaps there should be some documentation on absolute DONT'S and which parts of the language are dangerous if used the wrong way. Without this question and Eric's feedback many people would probably never hear about the time travel that is not explicitly forbidden by the C# specification, so it's definitely worth spreading (and improving the docs). – t3chb0t, 12 hours ago

      • I think Eric's advice wasn't to stop trying to experiment and learn about advanced concepts but rather to rely on battle-hardened correct implementations from the framework where you can. If you'd like to learn more about some of this, I'd recommend this: albahari.com/threading/… – RobH, 8 hours ago

      • @t3chb0t: with cryptography, the most important piece of advice is: don't roll your own. Not because anyone thinks you shouldn't be allowed to learn, but because it's a deceptively difficult subject, where it's easy to think you've done things right, but exceedingly difficult to actually get it right. I think the same can be said for concurrency (which is what Eric just did). It's not that it's easy to get this wrong because it's poorly documented; it's easy to get wrong because it's a very difficult subject. – Pieter Witvoet, 8 hours ago

      • @Dirk you changed the order but what Eric said is still perfectly valid: the compiler may read _Value and store it into a register at the very beginning of the function (then _Loaded is false and bla bla bla). Check how Lazy<T> does it: using ONE variable, not two (and Boxed in that case is to avoid boxing for primitive types). Don't expect your implementation to be much simpler than that one... – Adriano Repetti, 6 hours ago

      • @Voo: volatile: Adding the volatile modifier ensures that all threads will observe volatile writes performed by any other thread in the order in which they were performed. It does not ensure anything about any other variable/field (and it can be cached because cache rows are big enough to hold both _Value and fields from some object that happens to be at a lower address, which you can access and thereby make _Value loaded into the cache). – firda, 1 hour ago




















      up vote 1 down vote














      is the current implementation 'safe'?




      No it isn't, because:




      1. You did not implement Double-checked locking correctly - you have two fields (_Value and _Loaded) instead of only one.

      2. You have added a new feature - Invalidate - that invalidates the correctness of double-checked locking even if you fix the previous problem (e.g. by boxing the value).


      Lessons to learn




      1. Always prefer well-known implementations (e.g. System.Lazy<T> or System.Threading.LazyInitializer) over your own - thread/process synchronization and cryptography are two of the hardest topics to master; do not expect to be able to design these things yourself in a day, it may take years. (See the sketch after this list.)

      2. Benchmark/profile before you optimize - lock is often good enough and you can try e.g. System.Threading.SemaphoreSlim to speed it up a bit, but beware that it could get even worse - so again: test and measure first, be clever if you need to, be lazy if you can.

      3. If you still wish to write your own version then at least remember:


        1. Reads and writes can be reordered in any way (unless they depend on each other like e.g. in assignment a = b + c - the order of fetching b and c is not guaranteed, but write to a has to be done after the computation). Be extra cautious when synchronization involves more than one variable! You will likely think that it works because you do things in some order, but that is wrong! The order is not guaranteed across threads!


        2. volatile only guarantees the order of writes, not that other threads will see them immediately! That is why your Invalidate is incompatible with double-checked locking: it just won't be seen immediately by the threads that did not call it, and may actually NEVER be seen if they never call lock or something else that ensures cache synchronization.

        3. I am not an expert ;)
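
      As a concrete illustration of lesson 1, here is a minimal sketch (mine, using the SubEvents/LoadSubEvents names from the question, with SubEvents as a placeholder type) of how System.Threading.LazyInitializer already gives the "initialize at Get time, with access to this" behaviour the question asks for, without any hand-rolled locking.

      using System;
      using System.Threading;

      public class SubEvents { /* placeholder for the question's type */ }

      public class MyClass
      {
          private SubEvents _subEvents;      // the lazily created value
          private bool _subEventsLoaded;     // set by LazyInitializer once created
          private object _subEventsLock;     // lock object, created on demand by LazyInitializer

          public SubEvents SubEvents =>
              LazyInitializer.EnsureInitialized(
                  ref _subEvents,
                  ref _subEventsLoaded,
                  ref _subEventsLock,
                  LoadSubEvents);            // the factory can freely use instance state

          private SubEvents LoadSubEvents()
          {
              // [....]
              return new SubEvents();
          }
      }

      Note that LazyInitializer has no equivalent of Invalidate; safely resetting the flag is exactly the part that point 3.2 above warns about.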





























      • @DirkBoer: And your second version isn't safe either, for reasons you may find in this answer (two variables + volatile + Invalidate). – firda, 6 hours ago

      • I would change "I am not an expert" to "We are not experts". – jpmc26, 5 hours ago

      • @jpmc26: "We" is too broad, "You and me are not experts, at least not yet" ... which reminds me of: "The more you know the more you know how much you don't know". Anyway, I just wanted to say: this list is in no way complete, you won't become an expert just by reading it ;) (and I may even be wrong in something). Some other links: SO: Volatile vs. Interlocked vs. lock and System.Threading.Interlocked – firda, 4 hours ago











      Your Answer





      StackExchange.ifUsing("editor", function () {
      return StackExchange.using("mathjaxEditing", function () {
      StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
      StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["\$", "\$"]]);
      });
      });
      }, "mathjax-editing");

      StackExchange.ifUsing("editor", function () {
      StackExchange.using("externalEditor", function () {
      StackExchange.using("snippets", function () {
      StackExchange.snippets.init();
      });
      });
      }, "code-snippets");

      StackExchange.ready(function() {
      var channelOptions = {
      tags: "".split(" "),
      id: "196"
      };
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function() {
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled) {
      StackExchange.using("snippets", function() {
      createEditor();
      });
      }
      else {
      createEditor();
      }
      });

      function createEditor() {
      StackExchange.prepareEditor({
      heartbeatType: 'answer',
      convertImagesToLinks: false,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: null,
      bindNavPrevention: true,
      postfix: "",
      imageUploader: {
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      },
      onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      });


      }
      });














       

      draft saved


      draft discarded


















      StackExchange.ready(
      function () {
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fcodereview.stackexchange.com%2fquestions%2f207708%2fown-implementation-of-lazyt-object%23new-answer', 'question_page');
      }
      );

      Post as a guest















      Required, but never shown

























      8 Answers
      8






      active

      oldest

      votes








      8 Answers
      8






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes








      up vote
      30
      down vote



      accepted











      is the current implementation 'safe'?




      Absolutely not. The fact that you had to ask this question indicates that you do not understand enough about threading to build your own mechanisms like this. You need to have a deep and thorough understanding of the memory model to build these mechanisms. That is why you should always rely on the mechanisms provided for you in the framework, that were written by experts..



      Why is it unsafe? Consider the following scenario. We have two threads, A and B. _Value is null and _Loaded is false.




      • We're on thread A.

      • The memory location of _Value is loaded into the processor cache for the CPU that thread A is affinitized to. It is null.

      • We switch to thread B.

      • Thread B reads _Loaded as false, takes the lock, checks _Loaded again, calls create, assigns _Value and _Loaded and leaves the lock.

      • We switch back to thread A.


      • _Loaded is now true, so thread A returns _Value from the processor cache, which is null.


      Thread A is not required to invalidate the cache because thread A never takes a lock.!



      Now, I made an argument here from processor caches. This is the wrong argument to make in general. Rather, what you must do when trying to build a new threading mechanism like this is to not reason about any specific processor architecture, but rather to reason about the abstract memory model of the C# language. C# permits reads and writes to move forwards and backwards in time in multithreaded programs. Any time travel that is not explicitly forbidden by the C# specification must be considered to be possible. Your task is to then write code that is correct for any possible combination of movements of reads and writes in time regardless of whether they are really possible on a specific processor or not.



      Note that in particular the C# specification does not require that all threads observe a consistent set of write and read re-orderings. It is perfectly legal and possible for two threads to disagree on how a read was re-ordered with respect to a write.



      If writing correct programs in a world where all reads and writes can be moved around in time sounds hard, that's because it is. I am not competent to do this work, and I do not attempt to. I leave it to experts.




      are there any important performance considerations vs the original one?




      Only you can answer that question. Answer performance questions by gathering real-world empirical data.



      However, I can say a few things about this problem in general.



      The first is: double-checked locking is intended to avoid the cost of the lock. Let's examine the assumptions underlying that intention. The assumption is that the cost of taking the lock is too high on the uncontended path. Is that assumption warranted? What is the cost of taking an uncontended lock? Did you measure it? Did you compare it against the cost of the lock-avoiding check? (Since the lock-avoiding check code is wrong, testing it for performance is not actually meaningful since we can always write faster code if we don't care about correctness, but still, we need to know whether this intervention is an improvement.) And most importantly, is the cost of taking an uncontended lock relevant to the consumer of this code? Because they are the stakeholder whose opinions are relevant; what do they say about the cost of an uncontended lock?



      Let's suppose that the cost of an uncontended lock is relevant. Then surely the cost of a contended lock is enormously relevant. You've built a mechanism that potentially contends a lot of threads! What are the alternatives that you considered here? For example, you could avoid the lock altogether by deciding that it is OK for the create function to be called on multiple threads -- perhaps we know that it is cheap and idempotent. Those threads can then race to their heart's content to initialize the field, and we can use an interlocked exchange to ensure that we get a consistent value. That avoids the cost of the lock altogether, but it creates a different kind of cost, and puts a requirement on the caller to pass an idempotent creator.



      Let's consider other aspects of your solution with respect to performance. You allocate the lock object regardless of whether you ever take the lock, and you keep it forever. What's the burden on the garbage collector? What is the impact on collection pressure? These things are all deeply relevant to performance. Again, remember, the assumption here is that we are so worried about the couple of nanoseconds it takes to enter and leave an uncontended lock that we're willing to write a double checked lock. If those nanoseconds are relevant then surely the milliseconds it takes to do an extra collection are incredibly relevant!




      is there anything else I'm missing that I should be concerned about in a high traffic application?




      I don't know how to answer that question.






      share|improve this answer



















      • 8




        mhmm... this sounds sane but I'm still not sure how to apply this knowledge in a real-life scenario or simply to this question - maybe you could add some practical advice? Currently it's more like a rant on the OP for not knowing what they're doing ;-( I don't find this answer very helpful... sorry. It's not the same level as your other posts or blogs.
        – t3chb0t
        yesterday








      • 6




        @t3chb0t: As for practical advice, I gave the best practical advice I know. Do not attempt to roll your own low-level mechanisms; use mechanisms that were built by experts.
        – Eric Lippert
        yesterday






      • 11




        @CaTs: That said, I note that you are responding to tone rather than to the factual content. That Dirk and I both lack the knowledge and ability to do this work correctly is a fact. That the best course of action is to use tools correctly that were built by people who are competent is also a fact. If stating those facts emphatically makes people uncomfortable, well, that's a price I'm willing to pay in order to clearly and unequivocally state that rolling your own threading mechanisms is incredibly dangerous and easy to get wrong.
        – Eric Lippert
        16 hours ago






      • 3




        The purpose of StackOverflow is to create searchable artifacts - a library of questions and answers that may be useful to future programmers, and hopefully also useful to the original asker. If the CodeReview stack has the same goals, then this is an excellent answer. Considering that this is Eric Lippert (who served on the C# design team), adding a code snippet or other suggestion would have sent the wrong message to future programmers: "If I include this snippet / follow this pattern, then my homebrew thread management solution will be safe and work well - Eric Lippert said so."
        – benj2240
        8 hours ago






      • 3




        @benj2240: Thanks for pointing that out. Indeed, I very specifically did not answer the question "how do I fix the correctness problem?" because I am not competent to fix it. I mean, I could guess, and those would be some pretty informed guesses, but you want to reason about threading from a position of certainty, not hope! (And of course I did not answer the performance question because that's not an answerable question; performance is a process of identifying and meeting the needs of stakeholders, and we don't know who those are.)
        – Eric Lippert
        8 hours ago















      up vote
      30
      down vote



      accepted











      is the current implementation 'safe'?




      Absolutely not. The fact that you had to ask this question indicates that you do not understand enough about threading to build your own mechanisms like this. You need to have a deep and thorough understanding of the memory model to build these mechanisms. That is why you should always rely on the mechanisms provided for you in the framework, that were written by experts..



      Why is it unsafe? Consider the following scenario. We have two threads, A and B. _Value is null and _Loaded is false.




      • We're on thread A.

      • The memory location of _Value is loaded into the processor cache for the CPU that thread A is affinitized to. It is null.

      • We switch to thread B.

      • Thread B reads _Loaded as false, takes the lock, checks _Loaded again, calls create, assigns _Value and _Loaded and leaves the lock.

      • We switch back to thread A.


      • _Loaded is now true, so thread A returns _Value from the processor cache, which is null.


      Thread A is not required to invalidate the cache because thread A never takes a lock.!



      Now, I made an argument here from processor caches. This is the wrong argument to make in general. Rather, what you must do when trying to build a new threading mechanism like this is to not reason about any specific processor architecture, but rather to reason about the abstract memory model of the C# language. C# permits reads and writes to move forwards and backwards in time in multithreaded programs. Any time travel that is not explicitly forbidden by the C# specification must be considered to be possible. Your task is to then write code that is correct for any possible combination of movements of reads and writes in time regardless of whether they are really possible on a specific processor or not.



      Note that in particular the C# specification does not require that all threads observe a consistent set of write and read re-orderings. It is perfectly legal and possible for two threads to disagree on how a read was re-ordered with respect to a write.



      If writing correct programs in a world where all reads and writes can be moved around in time sounds hard, that's because it is. I am not competent to do this work, and I do not attempt to. I leave it to experts.




      are there any important performance considerations vs the original one?




      Only you can answer that question. Answer performance questions by gathering real-world empirical data.



      However, I can say a few things about this problem in general.



      The first is: double-checked locking is intended to avoid the cost of the lock. Let's examine the assumptions underlying that intention. The assumption is that the cost of taking the lock is too high on the uncontended path. Is that assumption warranted? What is the cost of taking an uncontended lock? Did you measure it? Did you compare it against the cost of the lock-avoiding check? (Since the lock-avoiding check code is wrong, testing it for performance is not actually meaningful since we can always write faster code if we don't care about correctness, but still, we need to know whether this intervention is an improvement.) And most importantly, is the cost of taking an uncontended lock relevant to the consumer of this code? Because they are the stakeholder whose opinions are relevant; what do they say about the cost of an uncontended lock?



      Let's suppose that the cost of an uncontended lock is relevant. Then surely the cost of a contended lock is enormously relevant. You've built a mechanism that potentially contends a lot of threads! What are the alternatives that you considered here? For example, you could avoid the lock altogether by deciding that it is OK for the create function to be called on multiple threads -- perhaps we know that it is cheap and idempotent. Those threads can then race to their heart's content to initialize the field, and we can use an interlocked exchange to ensure that we get a consistent value. That avoids the cost of the lock altogether, but it creates a different kind of cost, and puts a requirement on the caller to pass an idempotent creator.



      Let's consider other aspects of your solution with respect to performance. You allocate the lock object regardless of whether you ever take the lock, and you keep it forever. What's the burden on the garbage collector? What is the impact on collection pressure? These things are all deeply relevant to performance. Again, remember, the assumption here is that we are so worried about the couple of nanoseconds it takes to enter and leave an uncontended lock that we're willing to write a double checked lock. If those nanoseconds are relevant then surely the milliseconds it takes to do an extra collection are incredibly relevant!




      is there anything else I'm missing that I should be concerned about in a high traffic application?




      I don't know how to answer that question.






      share|improve this answer



















      • 8




        mhmm... this sounds sane but I'm still not sure how to apply this knowledge in a real-life scenario or simply to this question - maybe you could add some practical advice? Currently it's more like a rant on the OP for not knowing what they're doing ;-( I don't find this answer very helpful... sorry. It's not the same level as your other posts or blogs.
        – t3chb0t
        yesterday








      • 6




        @t3chb0t: As for practical advice, I gave the best practical advice I know. Do not attempt to roll your own low-level mechanisms; use mechanisms that were built by experts.
        – Eric Lippert
        yesterday






      • 11




        @CaTs: That said, I note that you are responding to tone rather than to the factual content. That Dirk and I both lack the knowledge and ability to do this work correctly is a fact. That the best course of action is to use tools correctly that were built by people who are competent is also a fact. If stating those facts emphatically makes people uncomfortable, well, that's a price I'm willing to pay in order to clearly and unequivocally state that rolling your own threading mechanisms is incredibly dangerous and easy to get wrong.
        – Eric Lippert
        16 hours ago






      • 3




        The purpose of StackOverflow is to create searchable artifacts - a library of questions and answers that may be useful to future programmers, and hopefully also useful to the original asker. If the CodeReview stack has the same goals, then this is an excellent answer. Considering that this is Eric Lippert (who served on the C# design team), adding a code snippet or other suggestion would have sent the wrong message to future programmers: "If I include this snippet / follow this pattern, then my homebrew thread management solution will be safe and work well - Eric Lippert said so."
        – benj2240
        8 hours ago






      • 3




        @benj2240: Thanks for pointing that out. Indeed, I very specifically did not answer the question "how do I fix the correctness problem?" because I am not competent to fix it. I mean, I could guess, and those would be some pretty informed guesses, but you want to reason about threading from a position of certainty, not hope! (And of course I did not answer the performance question because that's not an answerable question; performance is a process of identifying and meeting the needs of stakeholders, and we don't know who those are.)
        – Eric Lippert
        8 hours ago













      up vote
      30
      down vote



      accepted







      up vote
      30
      down vote



      accepted







      is the current implementation 'safe'?




      Absolutely not. The fact that you had to ask this question indicates that you do not understand enough about threading to build your own mechanisms like this. You need to have a deep and thorough understanding of the memory model to build these mechanisms. That is why you should always rely on the mechanisms provided for you in the framework, that were written by experts..



      Why is it unsafe? Consider the following scenario. We have two threads, A and B. _Value is null and _Loaded is false.




      • We're on thread A.

      • The memory location of _Value is loaded into the processor cache for the CPU that thread A is affinitized to. It is null.

      • We switch to thread B.

      • Thread B reads _Loaded as false, takes the lock, checks _Loaded again, calls create, assigns _Value and _Loaded and leaves the lock.

      • We switch back to thread A.


      • _Loaded is now true, so thread A returns _Value from the processor cache, which is null.


      Thread A is not required to invalidate the cache because thread A never takes a lock.!



      Now, I made an argument here from processor caches. This is the wrong argument to make in general. Rather, what you must do when trying to build a new threading mechanism like this is to not reason about any specific processor architecture, but rather to reason about the abstract memory model of the C# language. C# permits reads and writes to move forwards and backwards in time in multithreaded programs. Any time travel that is not explicitly forbidden by the C# specification must be considered to be possible. Your task is to then write code that is correct for any possible combination of movements of reads and writes in time regardless of whether they are really possible on a specific processor or not.



      Note that in particular the C# specification does not require that all threads observe a consistent set of write and read re-orderings. It is perfectly legal and possible for two threads to disagree on how a read was re-ordered with respect to a write.



      If writing correct programs in a world where all reads and writes can be moved around in time sounds hard, that's because it is. I am not competent to do this work, and I do not attempt to. I leave it to experts.




      are there any important performance considerations vs the original one?




      Only you can answer that question. Answer performance questions by gathering real-world empirical data.



      However, I can say a few things about this problem in general.



      The first is: double-checked locking is intended to avoid the cost of the lock. Let's examine the assumptions underlying that intention. The assumption is that the cost of taking the lock is too high on the uncontended path. Is that assumption warranted? What is the cost of taking an uncontended lock? Did you measure it? Did you compare it against the cost of the lock-avoiding check? (Since the lock-avoiding check code is wrong, testing it for performance is not actually meaningful since we can always write faster code if we don't care about correctness, but still, we need to know whether this intervention is an improvement.) And most importantly, is the cost of taking an uncontended lock relevant to the consumer of this code? Because they are the stakeholder whose opinions are relevant; what do they say about the cost of an uncontended lock?



      Let's suppose that the cost of an uncontended lock is relevant. Then surely the cost of a contended lock is enormously relevant. You've built a mechanism that potentially contends a lot of threads! What are the alternatives that you considered here? For example, you could avoid the lock altogether by deciding that it is OK for the create function to be called on multiple threads -- perhaps we know that it is cheap and idempotent. Those threads can then race to their heart's content to initialize the field, and we can use an interlocked exchange to ensure that we get a consistent value. That avoids the cost of the lock altogether, but it creates a different kind of cost, and puts a requirement on the caller to pass an idempotent creator.



      Let's consider other aspects of your solution with respect to performance. You allocate the lock object regardless of whether you ever take the lock, and you keep it forever. What's the burden on the garbage collector? What is the impact on collection pressure? These things are all deeply relevant to performance. Again, remember, the assumption here is that we are so worried about the couple of nanoseconds it takes to enter and leave an uncontended lock that we're willing to write a double checked lock. If those nanoseconds are relevant then surely the milliseconds it takes to do an extra collection are incredibly relevant!




      is there anything else I'm missing that I should be concerned about in a high traffic application?




      I don't know how to answer that question.






answered yesterday by Eric Lippert








      • 8




        mhmm... this sounds sane but I'm still not sure how to apply this knowledge in a real-life scenario or simply to this question - maybe you could add some practical advice? Currently it's more like a rant on the OP for not knowing what they're doing ;-( I don't find this answer very helpful... sorry. It's not the same level as your other posts or blogs.
        – t3chb0t
        yesterday








      • 6




        @t3chb0t: As for practical advice, I gave the best practical advice I know. Do not attempt to roll your own low-level mechanisms; use mechanisms that were built by experts.
        – Eric Lippert
        yesterday






      • 11




        @CaTs: That said, I note that you are responding to tone rather than to the factual content. That Dirk and I both lack the knowledge and ability to do this work correctly is a fact. That the best course of action is to use tools correctly that were built by people who are competent is also a fact. If stating those facts emphatically makes people uncomfortable, well, that's a price I'm willing to pay in order to clearly and unequivocally state that rolling your own threading mechanisms is incredibly dangerous and easy to get wrong.
        – Eric Lippert
        16 hours ago






      • 3




        The purpose of StackOverflow is to create searchable artifacts - a library of questions and answers that may be useful to future programmers, and hopefully also useful to the original asker. If the CodeReview stack has the same goals, then this is an excellent answer. Considering that this is Eric Lippert (who served on the C# design team), adding a code snippet or other suggestion would have sent the wrong message to future programmers: "If I include this snippet / follow this pattern, then my homebrew thread management solution will be safe and work well - Eric Lippert said so."
        – benj2240
        8 hours ago






      • 3




        @benj2240: Thanks for pointing that out. Indeed, I very specifically did not answer the question "how do I fix the correctness problem?" because I am not competent to fix it. I mean, I could guess, and those would be some pretty informed guesses, but you want to reason about threading from a position of certainty, not hope! (And of course I did not answer the performance question because that's not an answerable question; performance is a process of identifying and meeting the needs of stakeholders, and we don't know who those are.)
        – Eric Lippert
        8 hours ago


























      up vote
      13
      down vote














      is there anything else I'm missing that I should be concerned about in a high traffic application?




      Yes, your Lazy<T>.Value isn't generic anymore but an object, and if Func<T> returns a value type then a lot of un/boxing will take place. This might hurt performance.



      I think a LazyFactory.GetOrCreate<T>(...) would do a better job.
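
One possible shape for that, avoiding the boxing by keeping the stored value strongly typed (the name TypedLazyCell is made up, and a plain lock is used here rather than the double-checked variant discussed above):

using System;

public sealed class TypedLazyCell<T>
{
    private T _value;                      // an int stored here is not boxed
    private bool _loaded;
    private readonly object _gate = new object();

    public T Get(Func<T> create)
    {
        lock (_gate)
        {
            if (!_loaded)
            {
                _value = create();
                _loaded = true;
            }
            return _value;
        }
    }

    public void Invalidate()
    {
        lock (_gate) { _loaded = false; }
    }
}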






answered yesterday by t3chb0t





















      • Hi @t3chb0t, good point & thanks for your feedback. I could solve this by making the class itself (optionally) generic.
        – Dirk Boer
        yesterday












      • About LazyFactory.GetOrCreate<T> wouldn't you still need to put your core logic into the constructor?
        – Dirk Boer
        yesterday










      • @DirkBoer why is the generic parameter in the Get method rather than the type? It requires the class's consumers to behave well rather than allowing the compiler to enforce good behavior. Consider: string s = _SubEvents.Get(() => "s"); int fortyTwo = _SubEvents.Get(() => 42);
        – phoog
        yesterday








      • 1




        @phoog It was directed at t3chb0t, but to your point about reducing verbosity, just changing the class declaration to MyLazy<T> and changing private object _Value; to private T _Value; wouldn't increase the verbosity, I don't think.
        – jpmc26
        yesterday






      • 1




        @phoog If the class name is a major source of boilerplate, the OP surely has a severe problem with their naming techniques. Maybe they should consider namespaces or the aliasing form of using. =) I believe the question is referring to passing in the initialization function, which might be long and complex for legitimate reasons. The example usages of this class do also suggest the OP is passing in long lambdas.
        – jpmc26
        yesterday





























      up vote
      7
      down vote













      I like the idea, but you should carefully explain how this works in comments.



      Try this:



        MyLazy myLazy = new MyLazy();

      int value1 = myLazy.Get(() => 42);
      Console.WriteLine(value1);

      int value2 = myLazy.Get(() => 65);
      Console.WriteLine(value2);


      It correctly prints out:



      42
      42


      But even though we know the answer to everything is 42, this isn't that intuitive. The problem is obviously that you have to - or can - provide a creator function per call to Get<T>(Func<T> creator), that the creator is arbitrary, and that only the first one actually has any effect.






answered yesterday by Henrik Hansen



















      • 3




        The weird thing about this implementation is that the following compiles but fails with a runtime exception: MyLazy myLazy = new MyLazy(); int value1 = myLazy.Get(() => 42); Console.WriteLine(value1); string value2 = myLazy.Get(() => "sixty-five"); Console.WriteLine(value2);.
        – phoog
        yesterday






      • 1




        @phoog: That is probably because Get(() => "sixty-five") gets resolved to Get<string> which then tries to do return (string)(object)42.
        – firda
        yesterday








      • 2




        @firda yes, that is precisely why. My point with the example is to draw attention to the aspect of the design that is essentially an abuse of the generics system. Calling the property after the value has been initialized requires creating a delegate object that serves no purpose other than identifying the target type for the cast. The point of generics is so these checks can be done at compile time (which should save some nanoseconds at runtime), and that's not happening here.
        – phoog
        yesterday





























      up vote
      7
      down vote














      Meaning that I have to put half of my logic concerning the lazy property in the constructor, and having more boilerplate code.




      This is a little speculative, but I think you have an XY problem. You're trying to reduce boilerplate, but there are probably better ways to do that than what you've suggested.



      If I understand correctly, your problem is that your classes look something like this:



      public class MyClass
      {
          private Lazy<string> _MyStringValue;
          // ...

          public MyClass()
          {
              this._MyStringValue = new Lazy<string>(() => {
                  var builder = new StringBuilder();
                  builder.Append("a");
                  // 50 more lines of expensive construction
                  return builder.ToString();
              });

              // 100 more lines constructing OTHER lazy stuff
          }
      }


      Gloss over the details of building up the value; it's just an example. The important point is that you have all this logic here deep in your constructor.



      I think there are two things you can do to alleviate this problem:





      1. Parameterize



        Why put all this logic in the constructor? You're losing a lot of reusability by doing that anyway. So make these things parameters and construct them elsewhere:



        public class MyClass
        {
            private Lazy<string> _MyStringValue;
            // ...

            public MyClass(Lazy<string> myStringValue)
            {
                this._MyStringValue = myStringValue;
            }
        }



      2. You can embed this construction logic in a method, and then pass the method to the Lazy constructor:



        class MyStringValueMaker
        {
            // Could be an instance method if that's more appropriate.
            // This is just for example
            public static string MakeValue()
            {
                var builder = new StringBuilder();
                builder.Append("a");
                // 50 more lines of expensive construction
                return builder.ToString();
            }
        }


        And then elsewhere:



        var myClass = new MyClass(new Lazy<string>(MyStringValueMaker.MakeValue));



      Now suddenly everything is much better organized, more reusable, and simpler to understand.



      If that's not what your class originally looked like, well, then I think you'd be better off posting a new question asking for a review on the original class to get ideas about how to improve.






answered yesterday by jpmc26






















              up vote
              5
              down vote













              Why not wrap a Lazy<T> and then lazy-load the Lazy<T> in your Get?



              public class MyLazy {
                  private object lazy;
                  private object _Lock = new object();

                  public T Get<T>(Func<T> factory) {
                      if (lazy == null) {
                          lock (_Lock) {
                              if (lazy == null) {
                                  lazy = new Lazy<T>(factory);
                              }
                          }
                      }
                      return ((Lazy<T>)lazy).Value;
                  }
              }


              This takes advantage of existing features that have been tried and tested, instead of trying to roll your own.






answered yesterday by Nkosi



















              • 10




                I heard you like Lazy so I put some Lazy in your Lazy... ;)
                – Pieter Witvoet
                yesterday








              • 1




                If you took the ConcurrentDictionary then you wouldn't need the lock - it has the very convenient GetOrAdd method ;-]
                – t3chb0t
                yesterday












              • @t3chb0t good point
                – Nkosi
                yesterday






              • 8




                A static cache without an eviction policy is called a memory leak.
                – Johnbot
                yesterday










              • @Johnbot true. I'll revert that suggestion out for the time being.
                – Nkosi
                yesterday

























              up vote
              5
              down vote














              is there anything else I'm missing that I should be concerned about in a high traffic application?




              By passing the delegate in the Get method, you're instantiating a delegate object each time you call the property. System.Lazy<T> creates the delegate instance only once.
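
A sketch of the difference (the names and the int payload are made up; MyLazy is the class from the question): a lambda that captures this or a local produces a new closure and delegate object on every read, while a delegate stored once in a field is reused:

using System;

public class SubEventsHolder
{
    private readonly MyLazy _lazy = new MyLazy();   // MyLazy from the question
    private readonly Func<int> _cachedCreate;
    private readonly int _seed = 41;

    public SubEventsHolder()
    {
        _cachedCreate = CreateValue;                // allocated exactly once
    }

    // Allocates a fresh closure + Func<int> each time the property is read,
    // because the lambda captures _seed (and therefore "this").
    public int PerCallDelegate => _lazy.Get(() => _seed + 1);

    // Reuses the delegate created in the constructor; no per-read allocation.
    public int CachedDelegate => _lazy.Get(_cachedCreate);

    private int CreateValue() => _seed + 1;
}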






answered yesterday by phoog















































                      up vote
                      2
                      down vote













                      With the feedback that (my brain) could understand, I've come to this for now.




• (I think) I literally copied the locking structure of the lazy, thread-safe Singleton pattern

• Added the volatile keyword for the _Loaded check

• Moved the generic parameter to the class type, adding a bit more boilerplate code in exchange for more type safety and no boxing

• Added a warning to remind myself there might be issues


As for the advice "leave it to the smarter people": that's something I can't work with. I like to learn, I like other people to learn, and I prefer a society where people are motivated to fail (at a calculated cost) and learn for themselves.



I appreciate that everyone has a different opinion about that, and that's okay.



I'm still not 100% sure whether this solves at least the thread-safety problems of the first version, because the conversation went a bit off-topic, IMO. If anyone who is knowledgeable can comment on that, I would appreciate it. For the rest, I'm going to use this code in production and see whether it causes (practical) problems for my caching of properties.



/// <summary>
/// Warning: might not be as performant (and safe?) as the Lazy<T>, see:
/// https://codereview.stackexchange.com/questions/207708/own-implementation-of-lazyt-object
/// </summary>
public class MyLazy<T>
{
    private T _Value;
    private volatile bool _Loaded;
    private object _Lock = new object();

    public T Get(Func<T> create)
    {
        if ( !_Loaded )
        {
            lock (_Lock)
            {
                if ( !_Loaded ) // double checked lock
                {
                    _Value = create();
                    _Loaded = true;
                }
            }
        }

        return _Value;
    }

    public void Invalidate()
    {
        lock ( _Lock )
            _Loaded = false;
    }
}
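
For reference, a usage sketch for this generic version (SubEvents stands in for the real type):

private readonly MyLazy<SubEvents> _SubEvents = new MyLazy<SubEvents>();

public SubEvents SubEvents => _SubEvents.Get(() =>
{
    // [....]
    return new SubEvents();
});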





edited 12 hours ago

answered 12 hours ago

Dirk Boer



















                      • 2




As for the advice "Leave it to the smarter people". That's something I can't work with. I fully support this point! If it's so easy to do it the wrong way, then perhaps there should be some documentation on absolute DON'Ts and which parts of the language are dangerous if used the wrong way. Without this question and Eric's feedback, many people would probably never hear about the time travel that is not explicitly forbidden by the C# specification, so it's definitely worth spreading (and improving the docs).
                        – t3chb0t
                        12 hours ago








                      • 2




                        I think Eric's advice wasn't to stop trying to experiment and learn about advanced concepts but rather to rely on battle-hardened correct implementations from the framework where you can. If you'd like to learn more about some of this, I'd recommend this: albahari.com/threading/…
                        – RobH
                        8 hours ago






                      • 1




@t3chb0t: with cryptography, the most important piece of advice is: don't roll your own. Not because anyone thinks you shouldn't be allowed to learn, but because it's a deceptively difficult subject, where it's easy to think you've done things right, but exceedingly difficult to actually get it right. I think the same can be said for concurrency (which is what Eric just did). It's not that this is easy to get wrong because it's poorly documented, but because it's a very difficult subject.
                        – Pieter Witvoet
                        8 hours ago








                      • 1




@Dirk you changed the order, but what Eric said is still perfectly valid: the compiler may read _Value and store it into a register at the very beginning of the function (then _Loaded is false and bla bla bla). Check how Lazy<T> does it: using ONE variable, not two (and Boxed in that case is to avoid boxing for primitive types). Don't expect your implementation to be much simpler than that one...
                        – Adriano Repetti
                        6 hours ago






                      • 1




                        @Voo: volatile: Adding the volatile modifier ensures that all threads will observe volatile writes performed by any other thread in the order in which they were performed. It does not ensure anything about any other variable/field (and it can be cached because cache-rows are big enough to hold both _Value and fields from some object that happens to be at lower address, which you can access and make _Value loaded to cache).
                        – firda
                        1 hour ago


















                      up vote
                      1
                      down vote














                      is the current implementation 'safe'?




                      No it isn't, because:




                      1. You did not implement Double-checked locking correctly - you have two fields (_Value and _Loaded) instead of only one.

2. You have added a new feature - Invalidate - that invalidates the correctness of double-checked locking even if you fix the previous problem (e.g. by boxing the value; a sketch of that single-field approach follows this list).
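
A sketch of that single-field idea (not from the original code; the ResettableLazy and Box names are made up): the value and the loaded state travel together in one object behind a single volatile reference, so a torn state between two fields cannot be observed, and Invalidate becomes a single write:

using System;

public sealed class ResettableLazy<T>
{
    // Value and "loaded" state live in one immutable object, published through one volatile field.
    private sealed class Box
    {
        public readonly T Value;
        public Box(T value) { Value = value; }
    }

    private readonly object _lock = new object();
    private volatile Box _box;          // null means "not loaded"

    public T Get(Func<T> create)
    {
        var box = _box;                 // single volatile read
        if (box != null)
            return box.Value;

        lock (_lock)
        {
            if (_box == null)
                _box = new Box(create());   // volatile write publishes the fully built Box
            return _box.Value;
        }
    }

    // Other threads may keep serving the old value briefly until the write is observed.
    public void Invalidate() => _box = null;
}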


                      Lessons to learn




1. Always prefer well-known implementations (e.g. System.Lazy<T> or System.Threading.LazyInitializer) over your own - thread/process synchronization and cryptography are two of the hardest topics to master; do not expect to be able to design these things yourself in a day, it may take years! (A LazyInitializer sketch follows after this list.)

                      2. Benchmark/profile before you optimize - lock is often good enough and you can try e.g. System.Threading.SemaphoreSlim to speed it up a bit, but beware that it could get even worse - so again: test and measure first, be clever if you need to, be lazy if you can.

                      3. If you still wish to write your own version then at least remember:


  1. Reads and writes can be reordered in any way (unless they depend on each other, like in the assignment a = b + c - the order of fetching b and c is not guaranteed, but the write to a has to happen after the computation). Be extra cautious when synchronization involves more than one variable! You will likely think that it works because you do things in some order, but that is wrong! The order is not guaranteed across threads!


  2. volatile only guarantees the order of writes, not that other threads will see them immediately! That is why your Invalidate is incompatible with double-checked locking: it just won't be seen immediately by the threads that did not call it, and may actually NEVER be seen if they never call lock or something else that ensures cache synchronization.

                        3. I am not an expert ;)
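
As a sketch of point 1 above, the class can be replaced entirely by System.Threading.LazyInitializer; the host type and field names below are illustrative, and note that this covers only the lazy part - there is no built-in equivalent of Invalidate:

using System;
using System.Threading;

public class SubEvents { /* placeholder for the real type */ }

public class SubEventsHost
{
    // All three fields are managed by LazyInitializer; _subEventsLock may start out null.
    private SubEvents _subEvents;
    private bool _subEventsLoaded;
    private object _subEventsLock;

    public SubEvents SubEvents =>
        LazyInitializer.EnsureInitialized(
            ref _subEvents, ref _subEventsLoaded, ref _subEventsLock,
            () => new SubEvents() /* [....] real initialization here */);
}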








answered 6 hours ago

firda





















                      • @DirkBoer: And your second version isn't safe either, for reasons you may find in this answer (two variables + volatile + Invalidate).
                        – firda
                        6 hours ago










                      • I would change "I am not an expert" to "We are not experts".
                        – jpmc26
                        5 hours ago










                      • @jpmc26: "We" is too broad, "You and me are not experts, at least not yet" ... which reminds me of: "The more you know the more you know how much you don't know". Anyway, I just wanted to say: this list is in no way complete, you won't become an expert just by reading it ;) (and I may even be wrong in something). Some other links: SO: Volatile vs. Interlocked vs. lock and System.Threading.Interlocked
                        – firda
                        4 hours ago














