Sometimes good things come your way, carpe-diem!
Assumption is the mother of all fuckups. Find out for yourself.
Didn’t your mother tell you not to gang up on the other kids?
So, I did a bit of reading and wrote a little solver. It will solve most puzzles instantly, and all puzzles with a unique solution in under ten seconds. Not bad for a day’s work. Doesn’t even need multiple CPU cores.
I notice that Knuth has a clever way to do it a thousand times faster. I have not represented all the constraints explicitly…
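The post doesn’t show the solver itself, so here is a minimal backtracking sketch of the general approach in Python — illustrative only, not my actual implementation. The grid is a flat list of 81 ints, with 0 meaning “empty”.

```python
# Minimal backtracking Sudoku solver -- a sketch of the general approach,
# not the implementation described above. Grid: list of 81 ints, 0 = empty.

def candidates(grid, pos):
    """Digits that can legally go at index pos (0..80)."""
    row, col = divmod(pos, 9)
    box = (row // 3) * 3 * 9 + (col // 3) * 3   # top-left index of the 3x3 box
    used = set()
    for i in range(9):
        used.add(grid[row * 9 + i])                  # same row
        used.add(grid[i * 9 + col])                  # same column
        used.add(grid[box + (i // 3) * 9 + i % 3])   # same 3x3 box
    return [d for d in range(1, 10) if d not in used]

def solve(grid):
    """Return a solved copy of grid, or None if unsolvable."""
    try:
        pos = grid.index(0)
    except ValueError:
        return grid[:]                # no empties left: solved
    for d in candidates(grid, pos):
        grid[pos] = d
        result = solve(grid)
        if result is not None:
            return result
    grid[pos] = 0                     # undo and backtrack
    return None
```

Knuth’s speed-up comes from recasting the puzzle as an exact-cover problem so all four constraint families (cell, row, column, box) are represented explicitly, rather than being rediscovered at every step as above.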
Answers to questions:
- Not necessarily, but for human solvable problems, the decision tree is pretty constrained. There may be heuristics I don’t know about.
- Depends on the initial constraints; sometimes yes, sometimes no.
- That’s the next big thing I have to figure out. It looks much more tricky.
- Once you pass the point where there is no longer enough information for a unique solution, each digit you remove from the initial configuration multiplies the solution space by a factor of roughly 10 to 100.
- With no initial constraints, taking symmetries into account, the state space is five billion. Not taking symmetries into account it’s one trillion times that again.
- Is there an initial configuration of the board which looks sensible, but has a logical inconsistency so there aren’t actually any solutions?
- In the puzzles posed by newspapers (at least the easy ones I have done), there is always a fully constrained position for the next move. Is this always the case?
- If there isn’t a fully constrained position for every next move, are all subtrees except one blind alleys, or not?
- How does one generate a puzzle where one knows that there are always enough constraints to make a reasonably unambiguous decision?
- In the case of underconstrained configurations how large is the state space?
- In the case of no initial constraints (no numbers at all), how large is the state space of all possible solutions?
Looking at making further progress on my DB engine. In no particular order:
- Case-insensitive string indexes. The DB needs to be able to handle changes in index order.
- Removal of the current index tag scheme, replacing it with more complete “index combobulation” info, able to handle more than just A-B buffering.
- Complete transactionality, where metadata and data changes can be done in one user transaction: each user transaction consists of multiple changesets. It should then be able to roll forward / roll back multiple sets of changesets to support existing commit / abort functionality. (Pipeline depth).
- Support for multiple transactions in progress at the same time: instead of A-B buffering, we have Current (Next1, Next2, Next3) buffer sets. Amount of concurrency can be specified at transaction start time. (Pipeline width).
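The “user transaction = multiple changesets” idea above can be sketched roughly as follows. This is a hedged illustration of the rollback mechanics, not the engine’s real API — `Changeset` and `Transaction` are made-up names, and the store is just a dict.

```python
# Hedged sketch: a user transaction as a pipeline of changesets, each
# carrying enough undo information to roll the whole thing back.
# Illustrative only -- not the real engine's data structures.

class Changeset:
    """One atomic batch of key/value changes, with enough info to undo."""
    def __init__(self):
        self.redo = {}   # key -> new value
        self.undo = {}   # key -> previous value (None if key was absent)

class Transaction:
    def __init__(self, store):
        self.store = store
        self.changesets = []       # len(changesets) is the pipeline depth

    def apply(self, changes):
        """Record and apply one changeset within this transaction."""
        cs = Changeset()
        for key, value in changes.items():
            cs.undo[key] = self.store.get(key)
            cs.redo[key] = value
            self.store[key] = value
        self.changesets.append(cs)

    def abort(self):
        """Roll back every changeset, newest first."""
        for cs in reversed(self.changesets):
            for key, old in cs.undo.items():
                if old is None:
                    self.store.pop(key, None)
                else:
                    self.store[key] = old
        self.changesets.clear()

    def commit(self):
        """Changes are already applied; just drop the undo log."""
        self.changesets.clear()
```

Rolling back newest-first matters: a later changeset may overwrite a key an earlier one introduced, and only reverse order restores the original state.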
Once I’ve got all that done, it should be possible to wrap the local DB logic into a distributed database. This will require a “journal of journals”, indicating the various transaction history paths that each copy of the database has gone through. Depending on the consistency level required, you could then choose “last write wins” or “cascading abort” semantics.
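To make the “last write wins” half of that concrete, here is a toy merge over per-replica journals. Everything here is illustrative — the real scheme would need proper ordering (vector clocks or similar) rather than bare timestamps, and the journal format is invented for the example.

```python
# Sketch of merging a "journal of journals": each replica records the
# transaction history it has applied; last-write-wins picks a winner per key.
# Purely illustrative -- bare timestamps stand in for real causal ordering.

def last_write_wins(journals):
    """journals: {replica: [(timestamp, key, value), ...]} -> merged state."""
    merged = {}      # key -> (timestamp, value)
    for entries in journals.values():
        for ts, key, value in entries:
            if key not in merged or ts > merged[key][0]:
                merged[key] = (ts, value)
    return {key: value for key, (ts, value) in merged.items()}
```

“Cascading abort” would instead detect that two journals diverge after a common prefix, and roll one replica’s suffix of changesets back before replaying the other’s.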
So I have now made a little check-in app. It asks you to register your e-mail and a contact e-mail. Once you’ve done that, you need to log in or use the quick check-in link once per day. If you don’t (maybe coronavirus has got you, or you’re not well), then it automatically sends a mail to the contact address you specified.
You can try it out here.
N.B. Some web browsers will complain about an “invalid security certificate”. I like encrypting and keeping your passwords safe, but can’t be arsed to register the website certificate just for a demo, so if you want to proceed, click “Advanced” or “Details”, and then “Proceed to website”.
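The core of the app is just a daily sweep over registered users. Here’s a rough sketch of that logic — the record fields and the `send_mail` callback are stand-ins I’ve invented for the example, not the app’s actual schema or mail backend.

```python
# Sketch of the check-in app's daily sweep. Assumes users are dicts with
# "email", "contact_email" and "last_checkin" fields (invented schema),
# and send_mail is whatever mail backend the real app uses.
from datetime import datetime, timedelta

def sweep(users, send_mail, now=None, grace=timedelta(days=1)):
    """Alert the contact of anyone who hasn't checked in within the grace period."""
    now = now or datetime.utcnow()
    alerted = []
    for user in users:
        if now - user["last_checkin"] > grace:
            send_mail(
                to=user["contact_email"],
                subject=f"No check-in from {user['email']}",
            )
            alerted.append(user["email"])
    return alerted
```

In practice you’d run something like this from a cron job once a day, and record that an alert was sent so the contact isn’t mailed repeatedly.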
So, I’ve been working on my latest top secret project. In summary:
- Progress pretty good, but didn’t do much work on it over the hols.
- New bespoke database engine pretty good (definitely fast enough!)
- It handles Instagram quite nicely, filtering posts and users.
- A bit more work needed to present tweets & Twitter data: they’re in the database engine, but the display needs more work.
- After basic filtering is done, I’ll work on more advanced analytics: graphs of who posts / replies to whom, post volumes, hashtags etc.
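The “who replies to whom” graph boils down to counting directed edges between authors. A minimal sketch, assuming posts are simple dicts with `author` and an optional `reply_to` field (a made-up schema, not my actual data model):

```python
# Sketch of the "who replies to whom" graph: count directed edges between
# authors. The post schema here is invented for illustration.
from collections import Counter

def reply_graph(posts):
    """Count replies per (replier, original author) pair."""
    edges = Counter()
    for post in posts:
        target = post.get("reply_to")
        if target:
            edges[(post["author"], target)] += 1
    return edges
```

The resulting edge counts feed straight into the volume and hashtag analytics too: the same one-pass Counter pattern works for posts-per-user and hashtag frequencies.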