One Minute Man
Posted: Tue Oct 21, 2008 5:26 pm
by canine
Code:
#!/bin/bash
url="http://www.hacker.org/challenge/misc/minuteman.php"
temp="/home/brian/.temp_compare_page_file"
temp_md5sum="$temp.md5"
curl "$url" 2>/dev/null > "$temp"
if [ ! -e "$temp_md5sum" ]
then
    # first run: record the checksum of the placeholder page
    md5sum "$temp" > "$temp_md5sum"
else
    if ! md5sum --status -c "$temp_md5sum"
    then
        # checksum mismatch: the page changed, keep a copy
        cp "$temp" /home/brian/Desktop/OHSHIT
    else
        rm "$temp"
    fi
fi
Basically, each time this script is run, it fetches the web page in question, and compares the md5sum of the file to an existing md5sum. If it doesn't match, it copies it to my desktop and renames it OHSHIT.
After a few hours of running every minute, OHSHIT magically popped up on my desktop.
I used cron to schedule it.
Posted: Tue Oct 21, 2008 6:18 pm
by MerickOWA
That's similar to what I did, only I used 'diff' and 'webget' and just put a sleep in a loop
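A minimal sketch of that loop, assuming plain wget and diff (I don't know 'webget'; the function and file names here are my own invention):

```shell
#!/bin/sh
# Sketch only: poll the page and stop once it differs from the first
# snapshot. wget stands in for 'webget'; file names are invented.
poll_until_changed() {
    url=$1
    wget -q -O baseline.html "$url"
    while sleep 60; do
        wget -q -O latest.html "$url"
        # diff exits 0 while the two snapshots are identical
        diff -q baseline.html latest.html > /dev/null && continue
        cat latest.html
        return 0
    done
}
# usage: poll_until_changed http://www.hacker.org/challenge/misc/minuteman.php
```

The baseline snapshot avoids hard-coding the "back later" text at all; any change to the page ends the loop.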
Posted: Tue Oct 21, 2008 9:02 pm
by canine
MerickOWA wrote: That's similar to what I did, only I used 'diff' and 'webget' and just put a sleep in a loop
TMTOWTDI
Posted: Wed Oct 22, 2008 4:26 pm
by m!nus
I did a PHP script that just compared the string in the HTTP response. If it has changed, it searches for "is" and ":" and sends the word after it as the solution. It almost worked the first time I used it, except that I forgot to remove the space after "is"/":".
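The extraction step (the PHP itself isn't shown) can be sketched in shell; the sample strings are invented. The trick is to let the pattern also eat the space after the colon:

```shell
# Sketch of the parsing m!nus describes: take everything after the last
# ':' and swallow the trailing spaces that tripped him up.
extract_answer() {
    printf '%s\n' "$1" | sed 's/.*: *//'
}
extract_answer 'the answer is: sesame'   # prints "sesame"
```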
Posted: Thu Oct 23, 2008 4:34 am
by falcon2424
I just had a cron job run "wget filelocation" in a temp directory.
Then I just had to ls -al in the directory and look for the file with the different filesize.
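A sketch of that setup (the directory name and crontab line are my guesses): wget suffixes repeated downloads as file.1, file.2, ..., so the answer page stands out by size.

```shell
# Sketch of falcon2424's approach; /tmp/minuteman is an invented path.
mkdir -p /tmp/minuteman
cd /tmp/minuteman
# a crontab entry along these lines would fetch the page every minute:
# * * * * * cd /tmp/minuteman && wget -q http://www.hacker.org/challenge/misc/minuteman.php
ls -alS    # largest first: the one odd-sized file is the answer page
```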
Posted: Sat Oct 25, 2008 5:03 am
by the_impaler
I did it in Windows: a "for /L" loop to change the time, lwp-request to get the page, and the output piped to "findstr /V".
Posted: Mon Oct 27, 2008 3:59 am
by beginning
I used Unity3D (hope that doesn't disqualify me) with this code:
Code:
var url : String;
var minutes = 0;

function Start () {
    while (true) {
        minutes += 1;
        print("Refreshed. Minutes: " + minutes);
        var www = new WWW(url);
        yield www;
        if (www.data != "<html><body>\nback later") {
            print (www.data);
            break;
        }
        yield new WaitForSeconds(40);
    }
}
Posted: Mon Oct 27, 2008 10:46 pm
by sigi
I let the following ruby program run overnight:
Code:
require 'net/http'

text = nil
begin
  text = Net::HTTP.get_response('www.hacker.org', '/challenge/misc/minuteman.php').body
  sleep 10
end while text =~ /back later/
File.open("answer.txt", "w") { |f| f.puts text }
Then I picked up the answer in the file the next day.
I'm exploiting the fact that the begin/end block is run at least once (even though the 'text' variable is still 'nil' before the first iteration). This is worth mentioning because it wouldn't work if the while condition were attached to a single statement/expression.
Posted: Tue Nov 04, 2008 6:36 am
by eike42
At what time was yours? Mine was about 5am
Posted: Mon Nov 17, 2008 11:42 am
by simon
Here's my answer in sh. At least no files are needed...
Code:
answer="back later" ; while [ "$answer" == "back later" ] ; do answer=`wget http://www.hacker.org/challenge/misc/minuteman.php -O- 2> /dev/null | sed 's|<.*>||g' | sed '/^$/d'` ; echo -n "." ; sleep 45 ; done ; echo "Answer: $answer"
Just Batch
Posted: Tue Jan 06, 2009 10:11 am
by macdachs
:start
wget http://www.hacker.org/challenge/misc/minuteman.php
wait 10
goto start
Wget automatically renames the downloaded file. After 24 hours I sorted the files by size: 4.52h
Posted: Fri Feb 06, 2009 12:45 pm
by aurora
This is my PHP script; I ran it on one of my servers using nohup. In the meantime I logged out and went to sleep.
Code:
#!/usr/bin/env php
<?php
set_time_limit(0);
do {
    sleep(30);
    $c = file_get_contents('http://www.hacker.org/challenge/misc/minuteman.php');
} while (strpos($c, 'back later') !== false);
print $c;
?>
Posted: Fri Feb 06, 2009 6:40 pm
by Belriel
Mine was almost the same as aurora's, running with PHP on the command line; after some time I had the answer written to the console. But it took several days, because the script stopped execution a few times due to network timeouts.
Code:
<?php
$url = "http://www.hacker.org/challenge/misc/minuteman.php";
$text = file_get_contents($url);
// strict comparison: strpos() returns 0 (loosely == false) on a match at offset 0
while (strpos($text, "back later") !== false) {
    $file = @file_get_contents($url);
    if ($file !== false) $text = $file;
    sleep(30);
}
echo $text;
?>
Posted: Thu Feb 12, 2009 8:03 am
by rajahten
I used this Python script, which I made.
Guess I could've made it with less code, but I'm still learning.
Code:
import urllib
import time

proxy = {'http': 'proxyurl'}
opener = urllib.FancyURLopener(proxy)
finished = False
compare = 'back later'
while not finished:
    f = opener.open("http://www.hacker.org/challenge/misc/minuteman.php")
    string = f.read()
    s = string[13:]  # skip the leading "<html><body>\n" (13 characters)
    if s != compare:
        print 'content displayed at ' + time.ctime(time.time()) + ' is: ' + s
        file = open('tempdump.txt', 'w')
        file.write('the content displayed at ' + time.ctime(time.time()) + ' is: ' + s)
        file.close()
        finished = True
    else:
        time.sleep(58)
Posted: Fri Feb 13, 2009 6:05 pm
by masgo
I let cron start a bash script every minute.
The script got the page with wget and then did a diff against the default "come back later" page; if the diff wasn't empty, its output was written to a file.