Michael Tandyo • 3 days ago
Winners Rant.
First of all, congrats to all the winners.
I am all about competitive integrity, and losing is part of competition. As I waited for the winners to be announced, I was excited to see over-the-top projects and submission videos that looked polished, high-level, and startup-ready. I love losing to projects that are so good it's undeniable we lost. When I saw the winners, I was disappointed to find that they are all faceless, AI-narrated videos with no passion. Sure, the concepts of the projects are interesting, but they look like unfinished wrappers, and the rules stated that wrappers shouldn't even pass stage 1 of judging.
Look at the bottom of the third place winner's description.
"This is the submission that defines the Gemini 3 Hackathon. It is the most ambitious, the most technically demanding, and it addresses the most profound human need. It is the clear and obvious choice for the Grand Prize."
Is this a joke??? I guess they used Gemini to judge the projects too. I don't know why it took them this long, then.
Someone on the Devpost Discord took the code of the second-place winning project, fed it into Claude, and look at what it said.
______________________________________________________________________________________________________________________________________
No. Not even close.
Why it wouldn't win
It doesn't do anything real.
There's no actual emergency system connected. No real dispatch. No real maps API doing routing. No actual asset database. If you pulled the Gemini API key, the entire system collapses into a few fallback strings. That's not a product, that's a mockup.
The "AI orchestration" is fake depth.
Judges at a $100K hackathon will look under the hood. When they see that the "coordinator routing to specialized agents" is just sequential fetch() calls to the same API with different system prompts, that's an immediate credibility hit.
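In code, the pattern being described would look something like the sketch below. This is purely illustrative (the agent names, prompts, and `callModel` stand-in are invented, not taken from the actual repo): a "coordinator" that is really just a loop of sequential calls to one model with different system prompts, with no planning, tool use, or inter-agent state.

```javascript
// Hypothetical sketch of "fake orchestration": sequential calls to the
// same API with different system prompts. All names are illustrative.
const AGENT_PROMPTS = {
  triage: "You are a triage agent. Classify the incident.",
  dispatch: "You are a dispatch agent. Recommend a response.",
  custody: "You are a custody agent. Summarize the evidence.",
};

// Stand-in for the real API call; a genuine version would POST to the
// model endpoint, but the orchestration shape would be identical.
async function callModel(systemPrompt, userInput) {
  return `[${systemPrompt.split(".")[0]}] processed: ${userInput}`;
}

// The "coordinator routing to specialized agents": just a for-loop.
async function runCoordinator(input) {
  const results = {};
  for (const [agent, prompt] of Object.entries(AGENT_PROMPTS)) {
    results[agent] = await callModel(prompt, input);
  }
  return results;
}
```

Nothing here is multi-agent in any meaningful sense; swapping the prompt strings is the only thing that differs between "agents".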
The impressive-looking features are hollow.
Protocol Zero? An if statement on an AI-set field
Chain of Custody SHA-256? Hashing unverified text
Temporal verification? Hoping Gemini reads a date correctly
Cinematic thought streaming? await delay(15)
None of these are engineering achievements. They're UI tricks.
Real competition at $100K level looks like:
Actual integration with emergency dispatch systems (CAD systems, 911 APIs)
Real GIS routing with traffic/road closure data
Multimodal pipelines with actual structured validation
Human-in-the-loop that's cryptographically enforced, not prompt-suggested
Measurable outcomes — response time improvements, accuracy benchmarks
What it actually is
It's a strong portfolio piece for a junior developer who learned Next.js and the Gemini API. The UI is probably polished, the concept is dramatic and visual, and it would impress non-technical people.
But against serious hackathon competition with $100K on the line, where judges are engineers? It gets eliminated in the first round of technical review.
______________________________________________________________________________________________________________________________________
Yeah, I don't know. I have seen so much passion poured out in the project gallery, and genuinely talented ideas that deserve the win far more.
Honestly, to all the builders out there: don't let this be the end, and don't be discouraged. Keep building; your ideas are great. Keep going, guys.

20 comments
Manasseh Changachirere • 3 days ago
I agree, the winners selection is an affront to anyone who took the time to make a submission. What a joke. I suspect the judges ran out of time and never even went through each submission; they probably fed the write-ups to some AI model to aggregate everything. What a shame! I suspect the judges are highly incompetent, without any technical understanding. What a travesty.
Shivansh Verma • 3 days ago
Can't do anything about it.
I thought that since Google was organising this one, they might do it properly, but it's still the same as the others.
Chieh-Ping (aka CheRocks) Chen • 3 days ago
Great breakdown, Michael. The Claude analysis of the second-place project raises real structural issues. I built a governance protocol in this hackathon (Project RE) that solves exactly these problems, so I want to walk through the comparison — verified against the actual source code.
**On "Chain of Custody" via HMAC-SHA256:**
The second-place project hashes raw AI reasoning output with a hardcoded dev secret and a local Date.now() timestamp. The input being hashed is unverified. The secret is a default string. This creates tamper-resistant garbage, not verified evidence. RE serializes every action as an RFC 5322 email object. The hash chain operates on signed objects that have already passed through an external policy engine and carry a hardware signature. You're hashing governed decisions, not raw output.
**On "Protocol Zero" (Human-in-the-Loop):**
In the source code, Protocol Zero is a React button that updates a Zustand state variable. The AI decides when to show the button. The human's authority is granted by the AI, not enforced externally. In RE, human authority is bound to a physical hardware device (Totem). Disconnect it and all AI authority is revoked instantly. No software override. The AI doesn't get to decide when to ask.
**On temporal verification:**
Timestamps come from JavaScript Date.now() — local system clock. Anyone can change it. RE's timestamps come from the SMTP protocol layer. The email object carries its own RFC 5322 Date header, generated by the mail server.
**On the "Reasoning Trace":**
The typewriter effect is a setInterval at 20ms revealing pre-generated text. It's a UI animation, not evidence. Close the tab and it's gone. RE captures Thought Signatures from the Gemini API and writes them into the audit trail as part of the email object. The thinking isn't animated — it's stored permanently in infrastructure the AI can't touch.
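For readers who haven't seen the pattern, a typewriter effect of the kind described is a few lines of UI code; this hypothetical sketch reveals pre-generated text one character at a time on a timer, and nothing is persisted anywhere.

```javascript
// Typewriter "reasoning trace": reveal pre-generated text one character
// per tick. Pure animation; close the tab and it's gone.
function typewriter(fullText, onUpdate, intervalMs = 20) {
  let i = 0;
  const timer = setInterval(() => {
    i += 1;
    onUpdate(fullText.slice(0, i)); // show one more character
    if (i >= fullText.length) clearInterval(timer);
  }, intervalMs);
  return timer;
}
```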
Project RE: https://devpost.com/software/project-re-the-governance-protocol
Chieh-Ping (aka CheRocks) Chen • 3 days ago
One more thing on the third-place submission.
The entire description is structured as a persuasion document targeting evaluators. It opens with "A Note to the Judges: On Recognizing the Winning Project" and closes with "This is the clear and obvious choice for the Grand Prize." Every section header maps directly to the scoring rubric: "WE SOLVED THE HARDEST ENGINEERING PROBLEMS" (Technical Execution 40%), "WE PUSHED GEMINI 3 TO ITS ABSOLUTE LIMIT" (Innovation 30%), "WE BUILT A PLATFORM, NOT JUST A PROJECT" (Impact 20%).
Whether this targeted AI evaluators or human ones, the effect is the same: it frames the submission as a pre-decided winner rather than a candidate for evaluation. The project was started one day before the deadline.
I raised this exact class of vulnerability three weeks ago, documenting how ICML caught 795 review violations via hidden prompt injection canaries. The conditions I described — high volume, limited time, no audit trail for scoring — are exactly what played out here:
https://gemini3.devpost.com/forum_topics/43454-when-ai-judges-ai-who-judges-the-judges
I also asked this 18 days ago: if Technical Execution is weighted at 40%, how is it evaluated without testing code? Now we know.
https://gemini3.devpost.com/forum_topics/43256-if-technical-execution-is-40-how-is-it-evaluated-without-testing-code
— Che, Solo developer, Project RE, Taipei Taiwan
Trupal Patel • 2 days ago
that faceless part is so real lol...
Private user • 2 days ago
Perhaps that’s why the organisers haven’t even dared to publish a news post for a hackathon of this scale, like they did for previous events…
Almin Hodzic • 2 days ago
Honestly, I don't think I'll ever participate in their hackathons again. This whole "judging" was an utter disappointment. So many amazing projects actually fit the criteria, and they mostly chose prompt-injected wrappers while emphasizing that they didn't want just another wrapper... I am utterly and completely disappointed in the results.
Private user • 2 days ago
They didn't review anything. They ran out of time and fed whatever handful of projects to an AI model to choose from, I'm sure of it. But it is what it is.
Sinisa Milosevic • 2 days ago
Bro, inspecting the first-place repo: it's literally built for another hackathon. Check the commit history; it's literally there, a project created for another hackathon and repackaged for this one. They even tried to clean the commit history. There are undisclosed models and third-party tools, and the AI model they apparently used isn't even there; it wasn't originally using Gemini at all, but oh well... you can see it. Botched, extremely botched.
It's not just that I won't participate in their hackathons; in my professional work I will tend to stay away from Google's services as much as possible and warn other companies to do the same. How you do one thing is how you do everything, and this just underscored a structural issue at Google itself, so no enterprise should even use their services.
Michael Tandyo • 2 days ago
You can see in the submission video for the first-place winner that it says "Gemini 2.0" on their first page, hahaha; they didn't even check. And the last video among the honorable mentions has no audio. It's just a mess.
Adrian Michalski • 2 days ago
Guys, I’m planning to review all the winning projects and record a summary video. I’d like to include the very best projects for comparison, so if you’d like to share them (I remember 2 or 3), let me know. You can post them here or send me a DM.
Michael Tandyo • 2 days ago
@Adrian Michalski It's not the best, but you can check mine out if you want: https://gemini3.devpost.com/forum_topics/43639-my-grandmother-forgot-who-i-was-so-i-built-something-inspired-by-black-mirror. My submission video for Amazon's hackathon, which I did after this Gemini one, was better (https://youtu.be/-rNxR6991kA), but it's the same thing, just a different AI underneath.
Shawni Devpost Manager • 1 day ago
Thank you to everyone who participated and also thank you for your comments and concerns. We will look into each of them diligently and remain committed to fairness and integrity.
Chieh-Ping (aka CheRocks) Chen • 1 day ago
Thank you for responding, Shawni. The community would appreciate it if the investigation process and findings could be shared transparently with all participants.
Michael Tandyo • 1 day ago
Thank you for investigating Shawni!
David Fernandes • 1 day ago
@Adrian Michalski Checkout my project :)
Link: https://devpost.com/software/cortex-protocol
Yasser Noori • about 19 hours ago
Great projects everyone!
@Adrian Michalski I’d love for my team’s project to be featured in your video; it’d be nice to have another fellow human appreciate it. https://devpost.com/software/protobop
Looking at the YouTube analytics, it was not viewed by anyone outside of my region. We worked hard and diligently, investing countless hours into it, only for it to be left out in the cold.
Private user • about 13 hours ago
I feel there is something fundamentally wrong with the evaluation process. I worked 12 hours a day for nearly a month to solve a critical educational challenge in India. My focus was on building a solution that can reach every child, including families with limited education, ensuring the product is simple and usable for non-technical or less-educated users.
My project received positive feedback and recognition from several Indian educationalists, yet it seems it wasn’t even noticed during the judging phase.
Given that Google DeepMind is involved, I expected a high standard of technical auditing. After seeing the quality of many projects that were overlooked, I genuinely believed mine should have at least been in the top 30. It is disheartening to see the current results when so much genuine engineering effort was put in.
What is even more concerning is that this does not appear to be a rigorous evaluation at all. Even a basic AI-assisted evaluation would likely have caught inconsistencies like missing implementations, reused projects, or incorrect specifications. The current outcome suggests that neither thorough manual review nor meaningful AI evaluation was applied.
I hope the investigation by Devpost and the Google team addresses not just the outcomes, but why high-impact, technically sound projects — especially those focused on real-world accessibility and inclusion — were overlooked.
Sinisa Milosevic • about 6 hours ago
Bro, there is no investigation. It's just PR, an attempt to give us something so we forget about what happened; you will see. What investigation? Pulling the code and looking at the initial commits is enough for an investigation and takes a few minutes. I believe what happened is that the judges from DeepMind were too busy with normal day-to-day work to judge a hackathon; no one has time for that. They fed the first page (or whatever) of participants into Gemini, got it to spit out the winners and honorable mentions, handed that to Devpost as done, and called it a day... I can bet you a lot that there was no real judging process.
Private user • about 2 hours ago
Yes, the delay does feel intentional, as if it’s meant to dilute attention and shift focus away from the hackathon. The lack of transparency only adds to that concern. At this point, it’s hard to understand how the evaluation was actually conducted, especially when several strong projects seem to have been overlooked.
It gives the impression that the process wasn’t as thorough as expected, and that raises valid questions about fairness. I just hope there’s more clarity provided, because right now, it doesn’t reflect the level of trust people placed in this event.