Higgs Hunters Talk

Zoom?

  • Ptd by Ptd

    Sorry if I've missed something obvious, but once I click 'discuss' I get access to a wonderful zoom feature; I can't work out how to do the same in 'classify'.

    Can anyone help?

    MT

    Ptd

    Posted

  • andy.haas by andy.haas scientist

    Not supported in classify.

  • MWPALMER by MWPALMER

    Is the same true for the slice vs. normal views? I can't toggle between them, so I cannot determine whether curves are OCVs or merely superposed in 2D.

  • Marcuandy1981 by Marcuandy1981

    When you are classifying, you can see just one view. All three views are visible only in the talk section.

  • MWPALMER by MWPALMER

    OK, I think I get it. While we are more likely to make mistakes when classifying if presented with less information, it is up to project scientists to wade through the mistakes we make.
    The tutorials are a bit confusing though, as they show all 3 views at once when talking about classifying, but we don't have that luxury.

  • Marcuandy1981 by Marcuandy1981

    I started to use the magnifier from my Windows. It works! 😃

  • maynzyuk by maynzyuk

    Some of the vertices that are not visible in the slice view (e.g. they may just look like almost parallel lines) become visible as a vertex in the frontal view. So, when I am classifying, I am bound to make a mistake (miss the vertex) that I would have avoided if I had all three views. I am just curious why we don't have access to more views when we classify?

  • DZM by DZM admin

    Hey everyone!

    The lack of a zoom feature in the classification interface was a conscious decision on the part of the science team. They wanted to make sure that everyone was classifying on a level playing field... so having a zoom feature that some people might use, and others might not, would add an extra variable that could mess with the data reduction. So, really... the science team doesn't want anyone using zoom!

    As for why we only see one view, you'd have to ask the scientists... There's a board for that! 😃

  • STFC9F22 by STFC9F22 in response to DZM's comment.

    Hi DZM,

    I don't think there is a level playing field between volunteers, though. At least in my browser (IE11) the classification image (but not the talk images) expands to fill the browser window, so if I were working on a UHD 4K display I would effectively be zoomed in compared to, say, an HD display.

    It also seems clear from the feedback on this forum that where many lines occur in the same vicinity (particularly in many of the simulations), zoom is necessary simply to maintain the same standard of mark-up as would be applied to less cluttered images. I confess to using the Windows accessibility magnifier where necessary to count lines, identify lines with similar curvature, and identify points of convergence in such circumstances. I do so consistently, so I see this as no different in principle from working on a large, high-resolution display.

    My one concern is that points of convergence, as displayed in the classification images, might actually be imprecise (which I don't think is the case?). Single events might then be classified as separate events, or lines that slightly miss the point of convergence might be excluded from the count. If this is not an issue, then in my opinion use of the magnifier makes the task easier and the classification more accurate.

  • DZM by DZM admin

    Hi @STFC9F22, it wasn't my decision (I'm not on the science team), but I really do think that the team discourages anyone from using any sort of zoom while classifying, and has gone to some lengths to avoid enabling it. I think the standard they want is that cluttered and uncluttered images alike are viewed by every volunteer at the same resolution, not that they are marked with equal "accuracy." I believe it has to do with how the data is eventually reduced from multiple classifications. You'd have to ask them for more precise detail, but it really does seem to be important to them. Perhaps they could write a blog post about it; it seems to be a very common question.

    If the classification interface is expanding on IE11, that would seem to negate that, though. Could you perhaps share a screenshot?

    Thank you!!

  • STFC9F22 by STFC9F22 in response to DZM's comment.

    Hi DZM,

    Happy to post a screenshot, but not being of the media-savvy generation I don't think I have the means to do so; it seems I need a shareable web location? Is it possible to email it to you?

    Just to describe in a little more detail what happens: as I vary the size of the browser, the square classification image, together with the 'Off-center vertex' and 'Something weird' labels and the 'OK' button, continually adjusts to remain fitted within the full browser window. The square image varies in size while the labels and buttons stay at fixed sizes. At full screen the image occupies the full height of the browser window; at small sizes the main heading menu and a little of the 'Example' heading beneath the image also come into view. My system is Windows 7 64-bit and the browser is IE 11.

    I'd just add that I believe my eyesight is reasonably OK but I think I would find the exercise considerably more difficult if the images were fixed at the size and resolution of the talk images.

  • DZM by DZM admin

    That's odd... I don't know if we'd actually change anything, but I'd like to see it, at least.

    If you upload the screenshot to http://imgur.com/, it's very easy to share! Just post the link here. 😃

  • STFC9F22 by STFC9F22 in response to DZM's comment.

    Fingers crossed, here goes

    Browser at Full Screen

    Browser size reduced

  • davidbundy77 by davidbundy77

    What is the accuracy of the images? Sometimes tracks appear to emerge from a vertex 1 or 2 pixels away from the center. Is this far enough to be classified as "off-center", or is the resolution not good enough to be able to tell?

  • DZM by DZM admin

    From what I've heard, tracks besides the 2-3 muons (green lines) that emerge from the center are automatically deleted, so I think you can consider any vertex other than the green muons to be an off-center vertex.

    A scientist should correct me if I'm wrong, though!

  • peterwatkins by peterwatkins scientist in response to DZM's comment.

    It is true that tracks (other than muons) that appear to come very precisely from the collision point in the normal view are not shown. However, there could still be a few tracks that really are from the collision point that fail this cut and are still shown. In addition, there are well-known particles, including those containing charm or bottom quarks, that decay slightly away from the collision point and are not the exotic particles being studied in this project. So the main focus of the OCV search is on vertices well separated from the collision point.

  • DZM by DZM admin

    So in other words, classify them if you like, but it's no biggie if you leave them unmarked if you think they're really, really close to the center?

  • davidbundy77 by davidbundy77

    ok, thanks

  • MWPALMER by MWPALMER

    A multivariate analysis of user classifications would be fascinating.

  • ElisabethB by ElisabethB moderator

    The project isn't even two weeks old! Give the scientists a little bit of time! 😄

    But it would be fascinating indeed! 😄

  • MWPALMER by MWPALMER

    Oh, I was just daydreaming about the sort of analyses I would like to do...

  • DZM by DZM admin

    Hopefully we'll get there! 😃
