Higgs Hunters Talk

Too many possible ways to interpret certain "toughies": We need tough examples dissected in a tutorial video.

  • mboschmd

    I've only done 54 classifications, and already I've encountered several complex situations where multiple interpretations are possible. What I (and probably others) need is some guidance on how to handle the tough ones... I noticed you skipped the "toughies" in the tutorial. Now that you've got some 140,000 or so interpretations in just 2 days, I wonder about the quality of some of my own contributions, and those of others, given that your very short tutorial skipped the "toughies" entirely.

    Until you do a video where multiple tough examples are dissected, I feel like I'm wasting my time making guesses on the "toughies". Guessing in science is almost useless... and sometimes worse than useless, when it takes you in the wrong direction. Bad data = bad science. Guessing is destructive to the logical faculties of the mind.

    Also, I cannot imagine CERN is going to give us any kind of credit, since 99.999% of the work, the interpretation of the off-center decays, is done by the fine, hard-working scientists at CERN. To me, Higgs Hunters just looks like a slave labor camp.

  • DZM (admin)

    Hi @miltonbosch, we've received a lot of feedback about wanting more diagrammed examples, and we can certainly mention that to the team. Thanks! 😃 There are some examples in the field guide underneath the classification interface... have you looked through those? They can be helpful, too.

    I do want to mention, though, that while guessing may be bad in many scientific contexts, educated guessing is actually not destructive in citizen science! The reason is that many people see each image/object, and when we reduce the data from several people's guesses, we very often find, astonishingly, that as a group they have arrived at the right answer! It's one of the miracles of citizen science.

    This is the reason why, in our animal-photo projects like Snapshot Serengeti, we don't allow people to mark an animal as "unknown"; they have to give their best guess. More often than you (or they) would think, they produce the right answer as a group! And when they don't... when the scientists see that 20 people have guessed 20 different animals... they recognize that image as a difficult one and flag it for expert analysis. Inaccuracy often tells the team as much about an image as accuracy does, as long as everyone is trying their hardest.
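
    (For the programmers in this thread: here's a rough sketch of what that kind of reduction step can look like. Everything in it, from the function name to the 60% agreement threshold, is an illustrative assumption on my part, not our actual pipeline.)

    ```python
    from collections import Counter

    def reduce_classifications(guesses, agreement_threshold=0.6):
        """Reduce one image's volunteer guesses to a consensus answer.

        guesses: list of labels from different volunteers.
        Returns (label, agreement) when volunteers broadly agree, or
        (None, agreement) to flag the image for expert review.
        Illustrative only -- a real pipeline is more involved.
        """
        counts = Counter(guesses)
        top_label, top_votes = counts.most_common(1)[0]
        agreement = top_votes / len(guesses)
        if agreement >= agreement_threshold:
            return top_label, agreement   # consensus reached
        return None, agreement            # too much disagreement: flag it

    # 18 of 20 volunteers agree -> consensus
    print(reduce_classifications(["zebra"] * 18 + ["impala", "gazelle"]))
    # 20 volunteers, 20 different answers -> flagged for expert analysis
    print(reduce_classifications([f"animal_{i}" for i in range(20)]))
    ```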

    So believe me: you are not doing bad science when you make your best guess on a toughie. You're doing good science--citizen science! And you're awesome for it!

  • HMB6EQUJ5

    DZM, while I understand and agree with you, I "second" miltonbosch on the video recommendation. The field guide is limited, given all the variables we are facing. For my part, I read the entire thread here and have a somewhat better understanding after reading all the questions peeps have posed. Cheers!

  • DZM (admin)

    I have no disagreement that it would be awesome for the team to produce some supplementary help of one form or another. That's traditionally been what the blog is for, and I know that the team is prepping their first blog post. We'll see if they address some of the FAQs that have popped up here over the past few days!

  • HMB6EQUJ5

    Thank you, Darren... this has been quite a start.

  • mboschmd

    DZM,

    Your explanation has been very helpful in putting the "big picture" together, so that I can understand the value of these group projects, and even the value of mistakes.

  • DZM (admin)

    Thank you very much, @miltonbosch! I'm feeling all warm and fuzzy inside now. 😃

    And yes, when I get back to the office on Monday, I'm going to connect with the team and our Oxford group and see what we want to do as far as additional tutorializing, additional diagrammed examples, blog posts, etc. Thanks for the suggestions!

  • kirkrust

    the slice view is what's giving me a headache...
    jus sayin...

  • kirkrust

    @DZM - I'm with mb; your explanation of the value of group projects helps to put it all into perspective. THANKS!

  • LeeReiswig

    I get that you don't want to train us all to mark only the same things, but by the same token you don't want us marking at random. We need to better ascertain what you're looking for. Perhaps you could post actual results of "proper" marks and "improper" marks. Maybe even unexpected results?

  • DZM (admin)

    Hi @lreiswig, we've gotten feedback like this quite a bit, so I wanted to ask: have you taken a look at the "guide" underneath the classification interface? Is it insufficient? I think that's what the team put it there for, but I need to know whether people either aren't seeing it or are finding that it doesn't answer their questions.

    Thank you!! 😃

  • rlund

    DZM, I think I've learned a few things from the TALK that should be mentioned in the guide:

    1. The guide doesn't state that the two initial green lines are the tracks of the two initial particles being smashed together... some people have tried to classify them as OCVs in slice view.
    2. The guide doesn't state that in slice view the whole middle horizontal line is the "center"; this has thrown many people off (including me, at first).
    3. The guide doesn't mention that the red dotted line is just a reference line, although most people have figured that out.

    Cheers, I think everything else is great!!! I often classify while I wait for code to compile 😃

  • DZM (admin)

    Hi @rlund, thanks for the great feedback!

    While I don't think we're going to do any more messing with the tutorial, that is definitely going on the FAQ page here on Talk, and we're adding a prominent link to it from the classification interface. We'll make sure it's stated very clearly!

  • FalconAF

    I just joined the Higgs Hunters project. I was initially confused by many of the concerns expressed in this thread too. But then, after reading many posts in these forums to try to get a better understanding of what was "expected" of me as a contributor classifying images, it dawned on me how it works.

    There really are no such things as "proper" or "improper" marks when it comes to us as "citizen scientists". There aren't really any "right" or "wrong" ones either. And if they produce "unexpected results", that isn't necessarily a "bad" thing if it happens. It might actually be a "good" thing for the real scientists.

    We are, as individuals, providing "data" to the real scientists. Each individual piece of data may not be that useful to them on its own, as far as being "right" or "wrong" goes. But when they get a large enough sample of individual inputs, that larger dataset may indicate a "trend" about something. It may point them to something that would (or would not) be worth examining further at THEIR level of understanding.

    We make our best guesses at what we see in the images, based on our level of understanding. They compile our best guesses and decide whether there might be "something there" that warrants further examination at their level of understanding. When "we" are "used" this way, it can make it much more efficient to identify the "more likely / less likely" data in projects that produce HUGE amounts of data, like the LHC. Our inputs, either "right" or "wrong", can point the real scientists in the directions that might get the most "bang for the buck" in what they end up examining in that plethora of data. They are pretty smart, and I trust their ability to decipher our inputs as being "meaningful" vs. "Well, I think we can disregard THAT". 😉
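
    (If it helps to picture that triage step in code, here's a toy version of how agreement vs. disagreement might be scored. It's purely my own illustration: the image names, the `disagreement` function, and the entropy scoring are all my assumptions, not anything I actually know about the real scientists' tools.)

    ```python
    import math
    from collections import Counter

    def disagreement(guesses):
        """Shannon entropy of one image's vote distribution.

        0.0 means every volunteer agreed; higher values mean more
        disagreement, i.e. an image the scientists may want to look
        at themselves. (A toy illustration, not the project's tools.)
        """
        total = len(guesses)
        return -sum((n / total) * math.log2(n / total)
                    for n in Counter(guesses).values())

    # Rank images so the most-disputed ones bubble up for expert review.
    images = {
        "img_001": ["OCV"] * 19 + ["weird"],         # near-unanimous
        "img_002": ["OCV", "weird", "nothing"] * 7,  # split three ways
    }
    for name in sorted(images, key=lambda k: disagreement(images[k]),
                       reverse=True):
        print(f"{name}: disagreement = {disagreement(images[name]):.2f}")
    ```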

    As for anything we may identify as "weird", I'm pretty sure they understand we may do that because it might be "weird" to us at our level of knowledge and understanding. And they will take that into account. But if enough of us did tag something as "weird", it just MIGHT cause them to look at it more closely (if they didn't already know it really WASN'T weird), and who knows...a new discovery may be made.

    I kinda think about it like this...

    Newton was sitting under an apple tree. An apple fell down to the ground, and he noticed it happen. Might he have thought to himself, "Gee, that's weird. I wonder why that happened?"

    What if he had never thought it was "weird" to begin with? We... the "citizen scientists" here right now... may not even have any images to stare at and think, "Gee, that looks weird to me, so I guess I'll mark it 'weird' and submit it for evaluation." 😉

    Yes, I would want to have the best understanding possible of what I'm looking at, so I could make the most "accurate" and "probable" data inputs concerning the images I classify. But even then, I know I'm gonna be "wrong" about them sometimes. The way we are presented the images... only being able to see one view at a time (Normal, Zoom, or Slice)... practically ensures that some of our initial guesses will be "wrong". But doing it that way is actually beneficial to the real scientists. It will help them identify "trends" when they compare all the inputs from all three possible views.

    So press on. Make your "best guesses". No harm, no foul if you are wrong sometimes. They expect that to happen. It actually makes their analysis of our individual data submissions easier to identify as meaningful or not in the bigger picture of things.

  • ElisabethB (moderator), in response to FalconAF's comment

    That's the spirit, @FalconAF!

    Happy hunting! 😄

  • DZM (admin), in response to FalconAF's comment

    This is beautiful. 😃 You understand it perfectly. THANK YOU!!
