•   3 days ago

Hackathon Winners? Evaluation thought process...

I'm curious what the thought process is for evaluating the hackathon projects,

like not only the criteria they mentioned in the description (40% technical execution, etc.),

but what do they actually look for in a winner? Can somebody tell me about the patterns you observed in the winning projects?

Not hating or criticizing, just genuinely curious how the judges actually evaluate these projects.

Honestly, technical execution, presentation effort, etc. make up about 60% of the evaluation criteria, but some winning projects seem a bit generic in UI and architecture. Once again, I'm not hating; I'm just curious where their X factor was and where my project fell short.

Congratulations to all the winners!!!

  • 3 comments

  •   2 days ago

    Bro, the judging process was likely: "Hey Gemini, review these for me, give me the top 3 picks, and I'll watch their videos."
    The 2nd place submission has no backend code.
    The 3rd place code is just a hardcoded demo with a prompt injection attack. I've actually posted about it here: https://gemini3.devpost.com/forum_topics/43667-third-place-was-a-prompt-injection-attack-devpost-and-google-owe-participants-an-answer. I did a technical review of the code and I'm disappointed. We might as well have recorded a nice-looking Figma demo, faked some code for our submission, and prompt-injected our way in, since obviously the 40% technical execution became 60% design and video.
    And that alone turned the entire hackathon from a Gemini 3 hackathon into a CapCut/Figma design competition.

  • Private user   •   about 12 hours ago

    I feel there is something fundamentally wrong with the evaluation process. I worked 12 hours a day for nearly a month to solve a critical educational challenge in India. My focus was on building a solution that can reach every child, including families with limited education, ensuring the product is simple and usable for non-technical or less-educated users.

    My project received positive feedback and recognition from several Indian educationalists, yet it seems it wasn’t even noticed during the judging phase.

    Given that Google DeepMind is involved, I expected a high standard of technical auditing. After seeing the quality of many projects that were overlooked, I genuinely believed mine should have at least been in the top 30. It is disheartening to see the current results when so much genuine engineering effort was put in.

    What is even more concerning is that this does not appear to be a rigorous evaluation at all. Even a basic AI-assisted evaluation would likely have caught inconsistencies like missing implementations, reused projects, or incorrect specifications. The current outcome suggests that neither thorough manual review nor meaningful AI evaluation was applied.

    I hope the investigation by Devpost and the Google team addresses not just the outcomes, but why high-impact, technically sound projects — especially those focused on real-world accessibility and inclusion — were overlooked.

  •   about 10 hours ago

    Yeah, and they probably used Gemini for the evaluation too, even though the model is weak, prone to hallucinations, and easy to target with prompt injection attacks. Also, the UIs were totally vibe-coded: no design system, no taste, no actual faces in the videos, etc.

    That prompt injection point is a very valid argument and kind of makes a lot of sense.
