
C# and SIMD

It would be great if the C# compiler and the .NET JIT compiler could utilize the SIMD instructions of current and future processors. Projects that require heavy calculation (MathDotNet.Numerics, for example) would benefit greatly from this feature.

2,114 votes
Georgii Kalnytskyi shared this idea

49 comments

  • Anonymous commented

    With the direction that the industry is taking, this should be considered high priority IMO. NEON could squeeze a lot of performance out of the low-power ARM chips we are seeing now; I hope MS is serious about the whole Fast and Fluid mantra they are pushing.

  • DrPizza commented

    In August, HotSpot added SIMD autovectorization of Java bytecode. HotSpot already has profiling and recompilation of hot spots (hence the name...). The Java JIT machinery was always ahead of .NET's and is leaving it ever further behind.

  • Andrew commented

    I develop games in C# .NET, as it's far more productive than C++ for smaller teams. I plan on doing so for Windows 8 Metro as well. Auto-vectorization or something like Mono.Simd would be very useful.

  • Igor Markovic commented

    I'm creating a performance-sensitive application which uses popcount extensively.
    I need to check whether the CPU supports SSE 4.2 and use popcount directly (or fall back to a software implementation if it isn't available).
    Will this feature be available, and when?
    Is there any time estimate, or should I use unmanaged C++ instead?
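    [Editor's note: .NET eventually shipped exactly this check-and-fallback pattern through System.Runtime.Intrinsics, long after this thread. A minimal sketch of what the commenter describes, assuming that later API, might look like:]

    ```csharp
    using System;
    using System.Runtime.Intrinsics.X86; // Popcnt hardware intrinsic (.NET Core 3.0+)

    static class PopCount
    {
        // Use the POPCNT instruction when the CPU and runtime support it,
        // otherwise fall back to a portable software implementation.
        public static int Count(uint value) =>
            Popcnt.IsSupported ? (int)Popcnt.PopCount(value) : SoftwareCount(value);

        // Classic SWAR (SIMD-within-a-register) bit count for CPUs without POPCNT.
        static int SoftwareCount(uint v)
        {
            v = v - ((v >> 1) & 0x55555555u);
            v = (v & 0x33333333u) + ((v >> 2) & 0x33333333u);
            return (int)((((v + (v >> 4)) & 0x0F0F0F0Fu) * 0x01010101u) >> 24);
        }
    }
    ```

    [Because Popcnt.IsSupported is a JIT-time constant, the branch is eliminated in the compiled code, so there is no per-call dispatch cost.]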

  • Henrik Öjelund commented

    We develop 3D CAD software, and the lack of SIMD intrinsic types is one of the major drawbacks of C# compared to C++.

  • dev commented

    In response to Carol Eidt: as a game developer, I mostly work with Vector2, Vector3, 4x4 matrices, and quaternions. In these use cases, SIMD intrinsics are by far the most important. This is one area where C# falls short of C++.

  • xoofx commented

    Along with a much-improved JIT or NGen tooling to generate highly optimized code, this feature is absolutely crucial. Please make float4, float2, etc. HLSL-like intrinsics available in C#.

  • Richard Thorne commented

    Thanks for the insight, Carol :) For the third case, a new application configuration value for enabling/disabling "heavier" JIT optimisations might be appropriate; that way server apps and games can enable it, while lighter-weight applications can have it off by default and see no change in boot time.

  • Anonymous commented

    #2 seems to be the bridge between #1 (useful for specialized geometric math in 3D) and #3 (where the coder is unaware that his task involves vectors of arbitrary length). As #1 will be used by the math-savvy and #3 by them and everyone else, I see no additional value in #2. However, I would welcome any solution you choose.

  • Wilbur Southey commented

    I sincerely hope to see MS.NET SIMD support in the next release. It's encouraging to see someone @ MS is pushing for it. Thanks Carol.

  • Matt Dotson commented

    #2 would be awesome - even better if the common vector types were interfaces, so third parties could plug in their own types. I hate having to mix and match vector types from multiple vendors when I'm trying to patch together a solution. It would also allow vectors optimized for different purposes: SIMD vectors vs. sparse vectors, etc.

  • Azarien commented

    #1 should be the most important; the rest can wait. Specifically, #3 is not very reliable, because there will always be the question of whether a particular loop is being optimized or not - as we already see with regular loops and arrays.

  • Tim Gordon commented

    Carol
    It's good to see that someone is pushing for this. I'd given up hope.
    #3 please. It's not just specialist maths libraries that would benefit from better floating point optimization.

  • Anonymous commented

    @Carol

    #3 is the most useful (and requires the most effort to implement), since it would allow existing code to take advantage of vectorization while also making it very easy to write new code that utilizes vectorization.

    #2 seems like a reasonable choice (though I'd definitely still like #3 to be on the roadmap), as long as it is powerful enough for people writing math library code. Fortran has some nice support for vector arithmetic (such as c(1:nd)=a(1:nd)/b(1:nd)) which could provide some inspiration.

    #1 - I would personally give this the lowest priority, since it would have the smallest target audience and most probably requires the most knowledge from the people using it. I guess at most one of my coworkers would even consider looking at it.

    I would also suggest that you think about if/how LINQ fits into this, so that code using LINQ would still be viable for optimization.
    I don't think LINQ should be the preferred way to achieve vectorization (since the syntax doesn't fit the math domain), but it would be really nice if LINQ code such as the examples below could be vectorized.

    e.g. LINQ:
    - var dotProduct = array1.Zip(array2, (v1, v2) => v1 * v2).Sum();
    - var add1 = array1.Select(v1 => v1 + 1);

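    [Editor's note: for comparison, here is the dot product above written against an explicit, width-abstracted vector type. This sketch assumes the System.Numerics.Vector&lt;T&gt; API that .NET eventually shipped; it did not exist when this comment was written:]

    ```csharp
    using System;
    using System.Numerics;

    static class SimdDot
    {
        // Dot product: process Vector<float>.Count lanes per iteration,
        // then finish the remainder with a scalar loop.
        public static float Product(float[] a, float[] b)
        {
            if (a.Length != b.Length) throw new ArgumentException("length mismatch");

            var acc = Vector<float>.Zero;
            int width = Vector<float>.Count; // e.g. 4 with SSE, 8 with AVX2
            int i = 0;
            for (; i <= a.Length - width; i += width)
                acc += new Vector<float>(a, i) * new Vector<float>(b, i);

            float sum = Vector.Dot(acc, Vector<float>.One); // horizontal sum of lanes
            for (; i < a.Length; i++)                       // scalar tail
                sum += a[i] * b[i];
            return sum;
        }
    }
    ```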
  • codekaizen commented

    Thanks for the great feedback, Carol. My preference would be for #3, since that would be the most universally useful throughout my code. With #1 and #2, I'd also like access to the floating-point context so I can, for example, turn off denormal handling or error trapping. However, the cases where I would just want the JITter to apply a vectorization heuristic (with some guidance on how to verify that vectorization is being applied) far outnumber the cases where I want explicit support in my day-to-day work.

  • Carol Eidt commented

    Support for SIMD is, and has been, on the list of things that we consider for the CLR, and as a developer on the JIT with a background in optimization, this is one of my personal favorites. As you can imagine, there are many features that compete for resources in each release, so I will continue to do my part to push for this one! In fact, I break this down into three sub-features, which are complementary:
    1. SIMD “intrinsics”, along the lines of what Mono has provided. I would like to see us expose these with an abstracted size, so that developers could take advantage of future hardware with larger SIMD registers
    2. General vector types of any length, implemented on top of the SIMD intrinsics.
    3. Automatic vectorization of numeric code
    Although these are each sizable features, the first two are the most compatible with the existing .NET architecture, as they will not significantly impact JIT throughput. The last requires additional infrastructure to support more heavyweight optimizations, either through pre-compilation or dynamic recompilation. We’d love to hear from you regarding the relative importance of these three features.
    cteidt - MSFT
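
    [Editor's note: to make sub-feature #1 concrete, here is a minimal sketch of fixed-size intrinsic vector types in the style Mono.Simd provides, illustrated with the System.Numerics Vector4 type that .NET later shipped; the type names, not the idea, are the assumption here:]

    ```csharp
    using System;
    using System.Numerics;

    class IntrinsicsDemo
    {
        static void Main()
        {
            // Fixed-width 4-float vectors; on SSE-capable hardware the JIT can
            // map these operations to single instructions (ADDPS, MULPS, ...).
            var a = new Vector4(1f, 2f, 3f, 4f);
            var b = new Vector4(5f, 6f, 7f, 8f);

            Vector4 sum = a + b;           // element-wise add: (6, 8, 10, 12)
            float dot = Vector4.Dot(a, b); // 1*5 + 2*6 + 3*7 + 4*8 = 70

            Console.WriteLine(sum);
            Console.WriteLine(dot);
        }
    }
    ```

    [Sub-feature #2 corresponds to the width-abstracted Vector&lt;T&gt; type that ended up shipping alongside these fixed-size types.]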
