Basic strategy: how do you get into the top 10?
Hi!
I'd like to ask whether any of the top scorers in this puzzle would be willing to give some hints as to how their solutions work.
I started out with a straightforward brute-force approach; this got me to about level 50, when it became clear it was too slow. Technical optimizations and a multi-threaded implementation only got me a few levels more. Then I implemented first one, then another method to identify unworkable solution paths early, in order to reduce the branching factor. This resulted in massive speed improvements, but currently, at level 130, it looks like it's not enough to get me much further, certainly not to level 300 or 600 as some of you have managed.
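(By "brute force" I mean essentially this: pick a start, then depth-first search over the four directions, where each move slides until it hits the border, a wall, or an already visited cell. A minimal sketch in Python; the helper names are my own, and y grows downward:)

Code:
def slide(pos, d, visited, walls, w, h):
    # Slide from pos in direction d until blocked; mark and return the
    # cells covered (an empty list means the move is illegal).
    (x, y), (dx, dy) = pos, d
    covered = []
    while True:
        nx, ny = x + dx, y + dy
        if not (0 <= nx < w and 0 <= ny < h) or (nx, ny) in walls or (nx, ny) in visited:
            break
        x, y = nx, ny
        visited.add((x, y))
        covered.append((x, y))
    return covered

def solve(pos, visited, total, walls, w, h):
    # Depth-first search over directions; returns a move list or None.
    if len(visited) == total:
        return []
    for d, name in (((1, 0), 'R'), ((-1, 0), 'L'), ((0, 1), 'D'), ((0, -1), 'U')):
        covered = slide(pos, d, visited, walls, w, h)
        if covered:
            rest = solve(covered[-1], visited, total, walls, w, h)
            if rest is not None:
                return [name] + rest
            for c in covered:  # undo the move
                visited.remove(c)
    return None

The outer loop just calls solve(start, {start}, total, walls, w, h) for every free cell in turn, where total is the number of free cells.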
So how do you do it? Have you kept the same basic approach and simply found more and better ways to eliminate unworkable branches, or do you use a fundamentally different approach?
I tried using hashtables to find recurring patterns, but they were so rare it didn't help. I also thought about partitioning the board into "territories" that have to be connected, so that complexity would grow only with the number of territories (or possibly their entry points) rather than with the much larger number of cells. But I'm pretty sure that can't work, since an entry point into such a territory can be eliminated by the coil passing it, making any algorithm that depends on such entry points flawed.
I'm not much further through than you are, and I doubt many other people have solved this puzzle this far in the way I did, but... When I found I could solve the levels by hand faster than my automated algorithm, I just did that, after making a nicer UI than the Flash one (e.g. backtracking support). I got to level 160, I think (wherever I am now, not sure exactly) before getting distracted with other things to do. It was fun, though.
I found a lot of the things I implemented to detect failure sooner were actually slower to calculate than just continuing with the brute force. But this is one case where I think the data layout in my Python program was poor: for Runaway, for example, I changed the data structures as well as the algorithm from time to time, and it made a big difference there.
I did have some good ideas for better ways to reduce the complexity and make larger boards more efficient to solve, but never implemented any of them. They're based on things I found myself doing mentally when solving by hand. The best idea I had was partitioning, as you said - I think it's very practical, if you do it right - but I haven't had time to implement anything.
The thing I've always found awkward with Mortal Coil is the endpoints, and generally the fact that the order of path segments matters. The endpoints make it hard to apply certain prunes, e.g. creating a dead-end, because that dead-end might be the one you're meant to end on.
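Concretely, the prune I mean looks something like this (a sketch only; the names are invented, and the exemption for cells next to the head is deliberately crude):

Code:
def clearly_hopeless(visited, walls, w, h, head):
    # A free cell with no free neighbours can only be entered directly
    # from the head; a free cell with exactly one free neighbour can
    # only be where the whole path ends, and there is at most one end.
    def free(c):
        x, y = c
        return 0 <= x < w and 0 <= y < h and c not in walls and c not in visited

    def neighbours(c):
        x, y = c
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    endpoint_candidates = 0
    for x in range(w):
        for y in range(h):
            c = (x, y)
            if not free(c) or c in neighbours(head):
                continue
            deg = sum(free(n) for n in neighbours(c))
            if deg == 0:
                return True   # isolated free cell, unreachable
            if deg == 1:
                endpoint_candidates += 1   # forced endpoint (the caveat)
    return endpoint_candidates > 1   # two forced endpoints: impossible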
(I'm actually below you, but that doesn't mean I still can't have points ;P)
As gfoot said, heuristics can help, but many of them actually take more time to compute than they save. On the other hand, there are some really great ones that are pretty quick to check.
Dead ends are pretty interesting. It's great if you can find one; even better if there are two.
Although I haven't actually implemented it yet, I think partitions are possible. The trick is in how you create and define them. On one prior puzzle I was able to make partitions mentally when I tried to solve it by hand, but I never realized what I was doing.
I don't want to reveal too much, though.
Good luck.
Hi,
In my opinion, partitioning is practical and very effective. I think it's a must at high levels.
And, as gfoot and therethinker say, finding the endpoints is another essential step. It works well together with partitioning: if you find the two parts with the endpoints inside, then all the other parts can be solved almost independently, with little trial and error. That saves a great deal of time.
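A building block for the partitioning is a plain flood fill of the free cells into connected components; a minimal sketch, assuming cells are (x, y) tuples:

Code:
from collections import deque

def components(free_cells):
    # Partition a set of cells into 4-connected components.
    free = set(free_cells)
    parts = []
    while free:
        seed = free.pop()
        part, frontier = {seed}, deque([seed])
        while frontier:
            x, y = frontier.popleft()
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if n in free:
                    free.remove(n)
                    part.add(n)
                    frontier.append(n)
        parts.append(part)
    return parts

The same test doubles as a prune during the search: if the free cells ever fall into more than one component, the position is dead, since the coil cannot jump between parts.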
I agree it's fun to solve this puzzle by hand. But it gets hopeless as the levels grow higher.
I'm at level 604 now, simply because that was the last level when I solved it (now I see the next level is ready on the server). I guess adum and ernie had also reached the last level at the time.
heh, i wish i could claim that my solver only stopped when the site ran out of levels, but the truth is that my solver ran out of steam.
i don't think we should get too specific about solving techniques, because coming up with them is half the fun and challenge. but i will say that there are two very basic techniques that most people come up with independently, and they get you pretty far. bok and abes, for example, just use those two. my solver has a lot more stuff, and ernie's even more so.
adum wrote:
level 1000 is around 250x250. it goes up fast to level 1010, which is around 500x500. then it switches to tails' generator for levels 1011 to 1110, and starts at 100x100.

I see somewhat higher numbers for these levels:
Level 1000 is 337x336.
From here on, the pattern of size increases changes:
Level 1001 is 350x350.
Level 1002 is 380x380.
...
Level 1009 is 590x590.
Level 1010 makes another jump: It's 650x650.
Level 741 is 250x250 already.
(edit: corrected board sizes after reaching level 1010)
levels
i've done a few rewrites to this...
naive gets to around 50 or 60
rewrite 1 gets to around 250 (python) or 500+ (from what i know of the java / c solvers)
rewrite 2 gets to 500 or so (python) or the top of the score list (java / c)
and i'm hoping rewrite 3 will pass everyone, once memory use is fixed.
Re: levels
ernie wrote:
and i'm hoping rewrite 3 will pass everyone, once memory use is fixed.

I'm curious: how long does your solver take to solve level 1058?
My solver needs 3m10s for level 1058 and 4m35s for level 1059.
Starting ;)
When I decided to automate the process, I started by reading the Mortal Coil threads.
I was really happy that the hard work was already done by the example bot. The " " -> "\t\t\t" replacement was easy to track down.
To start, I changed from the path to the qpath interface, then added backtracking to the random search so that it searches all paths from a given starting point. This allows me to shrink the set of possible starts
(parity exclusion of starts on some levels was implemented as the first improvement).
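(The parity exclusion is the usual checkerboard argument: the coil covers the cells one at a time, so the visit order is a Hamiltonian path on the grid graph and the cell colours alternate along it. A minimal sketch, with invented names:)

Code:
def parity_allowed_starts(free_cells):
    # On a path that alternates colours, if one colour has one cell
    # more than the other, both endpoints must have the majority
    # colour; if the difference is larger, no full path exists at all.
    black = {c for c in free_cells if (c[0] + c[1]) % 2 == 0}
    white = set(free_cells) - black
    if len(black) == len(white) + 1:
        return black
    if len(white) == len(black) + 1:
        return white
    if len(black) != len(white):
        return set()           # no solution from any start
    return set(free_cells)     # equal counts: endpoints differ in colour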
I have added dynamic detection of degree-one squares to stop the search at earlier stages.
This alone led to solving the levels up to 160 overnight.
(OK, one surprise was that qpath requires a first direction even on a degree-one starting square, and the search was stopped several times by slow HTTP requests.)
I am planning to add dynamic disconnection detection as the next cheap early stop for the search.
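(The naive version of that disconnection test is a plain flood fill, long before any O(log^2 n) dynamic structure: if some free cell is unreachable from the head through free cells, the position is dead. A sketch:)

Code:
from collections import deque

def disconnected(head, visited, walls, w, h):
    # True if some free cell cannot be reached from the head.
    def free(c):
        x, y = c
        return 0 <= x < w and 0 <= y < h and c not in walls and c not in visited
    total = sum(free((x, y)) for x in range(w) for y in range(h))
    if total == 0:
        return False
    hx, hy = head
    frontier = deque(n for n in ((hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)) if free(n))
    seen = set(frontier)
    while frontier:
        x, y = frontier.popleft()
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if free(n) and n not in seen:
                seen.add(n)
                frontier.append(n)
    return len(seen) < total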
The main problem with the search is the lack of information sharing between different starting points.
I am inclined to rewrite the code to work not on the grid but on a graph derived from the grid, with path shortcuts ... re-searching paths whose inner vertices all have degree 2 is surely a waste of time. This should be done carefully to allow extensions based on the paths forced by the Mortal Coil rules (as I would do manually) ... so the edges will probably be directed. I expect that in later stages the shortcut edges will require decision labels as well. (But this requires some assumptions about the starting points.)
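(Roughly, the shortcuts mean contracting every maximal chain of degree-2 cells into a single weighted edge between branch vertices. A sketch of the undirected version only; the directed, labelled edges are the hard part, and pure degree-2 cycles need separate handling:)

Code:
def contract_corridors(free_cells):
    # Returns {vertex: [(neighbour, corridor_length), ...]} where the
    # vertices are the cells of degree != 2 (branch points, dead ends).
    free = set(free_cells)
    def nbrs(c):
        x, y = c
        return [n for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)) if n in free]
    branch = {c for c in free if len(nbrs(c)) != 2}
    graph = {v: [] for v in branch}
    for v in branch:
        for first in nbrs(v):
            prev, cur, length = v, first, 1
            while cur not in branch:
                # a non-branch cell has exactly two free neighbours,
                # so there is exactly one way to continue
                prev, cur = cur, next(n for n in nbrs(cur) if n != prev)
                length += 1
            graph[v].append((cur, length))
    return graph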
BTW: I have proved that finding a Hamiltonian path on a subgrid is NP-complete (Tron AI challenge) ... based on Plesnik's proof that it is NP-complete for planar degree-3 graphs. I have not yet thought about the NP-completeness of Mortal Coil. If the answer is positive ... there would be an interesting generator of difficult levels. ... A transformation from a directed version of Plesnik's construction would probably be easier.
... Hmm, either I am blind or it is difficult/impossible to make such a construction ... it could give interesting insights into the ease of solvability. I should look carefully at some levels for inspiration.
... I have added zero-degree detection and a heuristic that chooses starting points near places rarely visited in the best fills from already tested starting points. This allows me to reach level 168 from level 0 in two and a half hours. Not starting inside degree-2 paths did not improve the solving time significantly. Disconnection detection and the graph representation are not implemented yet.
Wow, level 170 ... not done after 9 hours of computation ... a good time for the next level of coding.
Hmmm, unconditional shortcuts gave small improvements, if any, so I had to add shortcuts based on the assumption that the start is not in a given region ... I hope it will help a lot, but one should be careful to maintain the assumption sets well ... the dynamic disconnection O(log^2 n) algorithm is still not incorporated, and even when no leaf is produced, the branching factor of level 170 leads to long computations even from a single start. I bet implementing conditional shortcuts will improve the search decisions and speed much more, so that was postponed.
Hmm, it's unbelievable how many bugs I introduced when converting to conditional shortcuts ... I have been debugging for about a month and there are still bugs. Of course, some of them bring a better understanding of the problem, but I had expected to reach the next bug-free version faster, so that I could start thinking about further improvements. ... no crossing solutions yet ... no disconnection tests during the search, no "dead-end tests of subgraphs", but better filtering of the start/end sets.
Now it fails on level 88. BTW: it is slower on almost all levels than the search that led me to solve levels up to 174. I hope that is due to the preprocessing, which will pay off on higher levels.
OK, level 88 shows how careful one should be when using conditional shortcuts ... the correct solution has its end inside a conditional shortcut's assumption set (on a neighbouring shortcut), and the solution fill goes in the opposite direction through a modified path ... this almost says that conditional shortcuts can rarely be used ...
Now that I am thinking about implementing connectivity constraints, I am inclined to use the Euler tour trees of Monika Rauch Henzinger, and the Tarjan variant of Mikkel Thorup's top trees, uff ... they would use the shortcuts implicitly, and I could use the constraints of conditional shortcuts to stop further searching when violated. The idea is to use top trees to detect the first branching vertex on a path in O(log n), while maintaining Euler tour trees for a minimum spanning tree would guarantee connectivity. The search would "delete" unused non-tree edges. Maintain edge levels (and a chain of O(log n) ET forests) with the invariant that an ET tree on level i has at most n/2^i vertices; replacements for tree edges are found using the amortization argument that edge levels can only increase ... and that pays for the work on them ...
Removing as many edges as possible before making the next decision should lead to faster pruning ...
OK ... I found the next bug ... passed level 88 and got stuck on 99. The conditional shortcuts work better than the original search on most levels up to 99. The preprocessing on 99 could surely be improved.
Uff, it seems that after almost two months of debugging I have the next working version ... with region path search, but without combining neighbouring regions. I have to let it run for a while before I start introducing the next bugs ... advanced data structures are not implemented, and the representation of sets is far from optimal ...
After solving most of the other puzzle series, I have returned to Mortal Coil (after two years). I have found at least four bugs in the previous version of the code, and now it seems it is not buggy, just slow.
I am not happy with the conditional constraints, but I have used them during the search ... I do "iterative deepening" on the number of failed constraints, as I assume this number remains small even on really huge boards. There is still a lot of work left to improve the preprocessing ... reducing the number of required assumptions, improving the reasoning about corridor/region interactions ...
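(The "iterative deepening" is the usual pattern, only the budget is the number of tolerated violated assumptions rather than the search depth. A sketch against a hypothetical search(budget) interface that fails fast when the budget is exceeded:)

Code:
def solve_with_deepening(search, max_violations=8):
    # Retry with a growing budget of violated assumptions; most boards
    # should succeed with a small budget, keeping the search tight.
    for budget in range(max_violations + 1):
        solution = search(budget)
        if solution is not None:
            return solution
    return None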
It seems the puzzle is not dead, as some people have made progress recently.
Adum among them!
Seems 233 is the level forcing a rewrite ... some early cuts during the search from a chosen start point are required. Precomputation is much faster than the search, so precomputation at the time of each decision (even during the search), and organising the information to allow fast undo of a decision ... that is the direction I have to think in now.