Mike Vermeulen's web

computers, bicycle travel and other stuff


What brought me to Texas

Posted on March 22, 2022 by mev

On March 22, 2012 AMD employees in the Portland office were gathered for a mandatory meeting. It was announced that the office would be closing by the end of the year. It was a surprise to most, though as a manager I had been given early notice the preceding November, with the understanding that I keep things quiet until the announcement.

It was still a bit of a shock as I enjoyed our small compiler team, and enjoyed living in Portland. I was fortunate to be given a choice between two alternatives.  I could either:

  • Stay with AMD and accept an offer for a company move and job in the San Francisco Bay Area or
  • Leave AMD with a severance package.

The advance notice had given me some extra time to anticipate and think through things.  I had even quietly made my own trip to San Jose in February to see what it might be like to live there again – particularly if I still decided not to own an automobile.

After further consideration, I decided to reject both offers and come up with my own counter-proposal. What if I:

  • Stayed with AMD, but moved to the Austin office instead? Also, by the way, I could handle the move myself instead of having the company pay for it.

While I didn’t disclose them, I had my own reasons behind the proposal. I had lived in the Bay Area previously, between 1997 and 2001. It was an OK place to be, though it would be a little more challenging without a car. Instead, I was ready to try something new, and Austin seemed interesting to try out. If I didn’t like it there, I could always go somewhere else. In addition, I already had plans to spend the first half of 2013 bicycling across Africa, so Austin would be a slightly better place than Sunnyvale to spend the second half of 2012.

I was again fortunate that my offer was accepted. I decided to move at the start of July to give myself a six-month trial run in Austin. I arranged my own U-Haul trailer for the move. However, instead of moving the majority of my belongings to Texas, I placed them in a rented storage locker in Fort Collins, Colorado. This let me rent a smaller apartment in Austin and wait until after the Africa trip to finalize any move. I bade farewell to Oregon and placed my condo on the market. It was a bit sad because, aside from Colorado, Oregon was one of the places I most enjoyed.

In the moment, in March/April 2012, it was hard to decide between my choices, but as I look back I am happy with the choice I made. Austin has been an OK place, particularly for months with the letter “R”. There is enough outdoor stuff, an interesting city and a different place to be, even if I still sometimes think of myself as a Colorado kid and complain through the July/August heat. More importantly, I’ve enjoyed working for AMD and been engaged and challenged with my work.

Posted in reflections | 2 Replies

Wordle solvers – updated

Posted on February 13, 2022 by mev (updated February 14, 2022)

Last month, I posted a blog post on approaches to creating an automatic solver for Wordle.

After that posting, there was also a FiveThirtyEight Riddler question about the same topic by Zach Wissner-Gross.  In Zach’s original posting he referenced code and examples done by Laurent Lessard.  In Lessard’s work, including this blog post, he ran experiments trying several different techniques for creating an optimal approach, e.g.

  • Picking the guess that created the most choices (most buckets)
  • Picking the guess that minimized the size of the largest bucket
  • Picking a guess that maximized entropy

Experimental results suggested that the first of these was slightly better than the other two alternatives.

A week later, Zach Wissner-Gross posted the best solution received, credited to Jenny Mitchell.  Either from Jenny or from Zach, this posted solution also turned the problem into a lookup table, with each element of the tree having a designated solution.  The declared victor was essentially a variation of the “maximize splits” approach.

I was playing a little further with these solutions and trying to understand the search space.  As discussed in the previous post, each word splits the candidate mystery set into up to 238 unique buckets (3^5 minus five invalid choices), and in practice up to 150 of these buckets are populated, using the word TRACE.  As Jenny Mitchell’s solution and Laurent Lessard’s experiments show, this is indeed a very good solution approach.  However, I am skeptical whether it is the absolute best approach, for two reasons:

  1. In the context of creating a lookup table, it is possible that different approaches could yield slightly better solutions for parts of the search tree.
  2. The splits themselves are also influenced by the choice of valid guess words.  There are quite a few of them, but they do not cover every possible split.

Zach made one statement in arguing for a maximizing splits approach that I am not convinced is true:

It didn’t matter how many words were in each constellation, just the total number of constellations.

To see why, consider the following thought exercise of two hypothetical ways in which a group of 20 candidates could be split by a guess into 10 buckets.

  1. One word might happen to split the group into 10 buckets of two elements each. In this case, the expected number of guesses would be 2.5: one for the initial split into 10 buckets, and then for each of these buckets one could either guess correctly (1 more guess) or incorrectly (2 more guesses), thus 1 + (50% * 1) + (50% * 2) = 2.5.
  2. A second word might create 9 buckets of one element each, and a remaining bucket of 11 elements. In this case the expected number of guesses would be one for the initial split into 10 buckets; all the buckets with one element could then be solved on the next guess, and the remaining ones take however long it takes to split out the bucket of 11, thus 1 + (45% * 1) + (55% * x).

There is no inherent reason why these two calculations have to be the same. Some notion of entropy can still play a role, even if maximizing entropy is not the best solution. Hence, I believe Zach’s statement above is a good description of the approach, but it is not guaranteed to find the absolute best solution.

So I created a metric that is a little more entropy-based, instead of purely the number of buckets or the size of the largest bucket. I consider this the “weighted size” of the buckets. In the example above, 55% of the mystery words are in the bucket with 11 elements and 45% of the mystery words are in a bucket with 1 element, so the weighted size is (55% * 11 + 45% * 1) = 6.5. This is in comparison to a weighted size of (100% * 2) = 2.0 for the first split. So there is more entropy in the split where the buckets are all broken out than in one where words are still concentrated in a larger bucket.
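
To make the metric concrete, here is a minimal Python sketch (illustrative only; my actual tools are written in C) that computes the weighted size from the bucket sizes alone. Note this is the same as the sum of squares of the bucket sizes divided by the total. The two lists are the hypothetical splits from the thought exercise above.

def weighted_size(bucket_sizes):
    # Expected size of the bucket a random mystery word lands in:
    # a bucket of size s is reached with probability s/total and then
    # contributes s, i.e. sum(s*s) / total.
    total = sum(bucket_sizes)
    return sum(s * s for s in bucket_sizes) / total

# The two hypothetical splits of 20 candidates into 10 buckets:
even_split = [2] * 10              # ten buckets of two elements
skewed_split = [1] * 9 + [11]      # nine singletons plus a bucket of 11

print(weighted_size(even_split))   # 2.0
print(weighted_size(skewed_split)) # 6.5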

Note: In general, I would guess that a lower weighted size should help. Interestingly enough, though, a bucket of size 2 is likely among the least efficient choices, so higher entropy might not always be the best solution in some of these small cases.

Below is a table of the top 50 choices that minimize this weighted size (“wsize”), along with the number of buckets each creates, the largest bucket size (“max”), the number of single-element buckets (“ones”), and an estimated expected number of guesses (“eguess”):

num	word	bucket	max	ones	wsize	eguess
1	roate	126	195	23	60.42	3.31
2	raise	132	168	28	61.00	3.28
3	raile	128	173	22	61.33	3.28
4	soare	127	183	22	62.30	3.24
5	arise	123	168	26	63.73	3.26
6	irate	124	194	14	63.78	3.28
7	orate	127	195	28	63.89	3.30
8	ariel	125	173	21	65.29	3.33
9	arose	121	183	23	66.02	3.30
10	raine	129	195	27	67.06	3.24
11	artel	128	196	27	67.50	3.27
12	taler	134	196	24	67.74	3.29
13	ratel	134	196	25	69.84	3.29
14	aesir	116	168	32	69.88	3.33
15	arles	108	205	14	69.89	3.34
16	realo	112	176	20	69.95	3.36
17	alter	128	196	25	69.99	3.31
18	saner	132	219	33	70.13	3.26
19	later	134	196	34	70.22	3.33
20	snare	132	219	32	71.10	3.18
21	oater	128	195	38	71.25	3.38
22	salet	148	221	34	71.27	3.22
23	taser	134	227	31	71.28	3.26
24	stare	133	227	23	71.29	3.19
25	tares	128	227	26	71.54	3.24
26	slate	147	221	29	71.57	3.22
27	alert	131	196	25	71.60	3.24
28	reais	114	168	24	71.61	3.35
29	lares	118	205	22	71.74	3.33
30	reast	147	227	29	71.77	3.15
31	strae	125	227	16	71.85	3.19
32	laser	123	205	27	72.12	3.33
33	saine	136	207	25	72.59	3.25
34	rales	117	205	21	72.80	3.34
35	urate	122	202	25	72.83	3.31
36	crate	148	246	30	72.90	3.17
37	serai	110	168	20	72.92	3.30
38	toile	123	204	20	73.04	3.23
39	seral	128	205	24	73.08	3.17
40	rates	118	227	24	73.33	3.30
41	carte	146	246	30	73.52	3.21
42	antre	134	226	31	73.94	3.25
43	slane	133	225	25	73.99	3.19
44	trace	150	246	32	74.02	3.16
45	coate	123	192	22	74.51	3.22
46	carle	144	249	37	74.68	3.23
47	carse	139	237	26	74.83	3.20
48	stoae	110	177	18	74.90	3.26
49	reals	116	205	21	74.94	3.27
50	terai	113	194	17	75.14	3.27

The word “TRACE” is 44th on this list, even though it has the highest value for the number of buckets in column #3. Similarly, if you instead want to pick the word with the lowest maximum bucket size, a word like RAISE is lowest in column #4, but the words lowest in these columns do not always have the lowest expected weighted bucket size.

I am not convinced that uniformly maximizing entropy is the best choice, any more than I am convinced that maximizing buckets or maximizing spread is optimal for the entire search tree. Instead, I think the best choice can be slightly influenced by subtleties of particular subtrees as well as counts.

Looking at weighted numbers of guesses, recursively

To look at potential approaches and expectations, I decided to look at this inductively. In the end, we are creating a search tree composed of nodes containing one or more elements. What might be the expected number of guesses for these nodes, and how can they be accumulated recursively?

  • One of the bottom building blocks is a node with one element. These are listed in column #5 of the table above under the heading “ones”. There are not very many of them near the root of the tree: only 32 at a first-level guess of TRACE, or ~1% of the 2315 cases. If you are fortunate enough to reach one of them, it takes one more guess to confirm. However, as you go to lower levels of the tree, I expect the percentage of buckets with one element to increase.
  • Another building block is a node with two elements. As described above, one can flip a coin and get the right one ~50% of the time, so the expected number of guesses is 1.5.
  • Next are nodes with three elements. An average expectation could be to solve these in two guesses, using one of two possible approaches. Either you try the choices in this bucket one at a time, with (1/3 * 1) + (1/3 * 2) + (1/3 * 3) = 2 expected guesses, or you find something that splits the three choices into two and pick on the next guess. If you are lucky, one of the guesses might split the other two if incorrect.
  • Nodes with four or five elements might be even more “efficient” than nodes with three, since there is a larger probability of having a word that can split them into exactly four or five choices. At least this is what I’ve observed.
  • As the number of elements within a bucket increases, one gets into more situations where the problem gets solved recursively: find the expected guesses and then break the problem up recursively.

So I created an example that builds a search tree in this more dynamic fashion. When a node has only two elements, don’t look for a pick that evenly splits it into two buckets to maximize the split (or to minimize the bucket size); instead, guess one of the two choices and, if that wasn’t correct, pick the other. This uses a strategy with a 1.5 expected count rather than a 2 expected count.

I did this recursively, and when a node had too many elements in it, then rather than an exhaustive search for the absolute best candidate, I used a rough heuristic (wsize in my case, but it could have been the number of buckets or the size of the largest bucket) and tried the best candidate. I think my rough heuristic does a reasonable job, but I don’t claim it is absolutely optimal because there is always a chance that something further down the list might be just slightly better.
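
Here is a rough Python sketch of that recursion, again illustrative only: pick_guess and split_by_guess are assumed helper names standing in for the heuristic choice and the bucketing by Wordle response. The base cases encode the small building blocks listed above.

def expected_guesses(candidates, pick_guess, split_by_guess):
    # Estimated number of guesses to solve, with the answer assumed
    # uniformly distributed over the candidates.
    n = len(candidates)
    if n == 1:
        return 1.0                  # one more guess to confirm
    if n == 2:
        return 1.5                  # guess one; if wrong, the other
    guess = pick_guess(candidates)  # heuristic pick (wsize in my case)
    total = 0.0
    for bucket in split_by_guess(guess, candidates):
        if bucket == [guess]:
            total += 1.0            # this guess was itself the answer
        else:
            # Reached with probability len(bucket)/n; costs this guess
            # plus whatever the subtree underneath needs.
            total += len(bucket) * (
                1.0 + expected_guesses(bucket, pick_guess, split_by_guess))
    return total / n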

Looking at the table above, I notice that TRACE does pretty well within this overall list if used as a first choice, even if followed up later with a different heuristic. I am also struck that the word REAST at number 30 on the list might be even slightly better as this first choice, even though it is not the best choice by other metrics like maximal number of splits, smallest largest bucket or weighted bucket size.

Instead, REAST has almost as many buckets as TRACE (147 vs 150) but also a slightly smaller maximum bucket size (227 vs 246).

Summary

I am not claiming that I have found the best solution for solving Wordle.

Instead, I am suggesting that other claims of an ideal solution might not be as strong as stated. With a lookup-table approach, I do think one could find a truly optimal table. However, rather than a single heuristic for creating all of that table, it may be the case that parts of the search tree call for slightly different heuristics.

In the table above, the last column, eguess, is an estimated number of guesses derived recursively and weighted by the number of elements in each node along the way. There is a possibility my calculations are slightly off, but I’ll also observe that the numbers are slightly lower than those presented in FiveThirtyEight.

I expect the largest source of discrepancy is nodes with two elements. A naive approach of “maximize spread” solves these in two guesses, whereas an approach of trying one and, if it isn’t right, trying the other solves them in one and a half guesses on average.

The second potential discrepancy isn’t as strong, but comes from examining expectations for REAST vs. TRACE. These suggest that it is possible to pick particular choices optimized for different parts of the tree (in this case the initial guess, but it could also be lower down). Undoubtedly, maximizing spread and minimizing the largest buckets are good things to do and lead to strategies that perform very well. It is the step after that, of claiming “ideal”, that I am questioning.

Two general notes made in the last blog post also apply. First, this experiment was done in the context of a search tree for exactly the 2315 possible words in Wordle. I expect the general techniques to still apply, but the results to differ, if picking a larger set such as the ~40,000 five-letter words in the Google corpus.

The second note is that the way systems solve Wordle differs from how people solve Wordle. My observation is that people often try to find initial letters to work from and go from there, whereas for a system it is a combinatoric elimination problem. This was reinforced by the Wordle for 14 February, where choosing REATH followed by DOILY narrowed the space to a single word in a way I’m sure I would still have no clue about…

Posted in computers | Leave a reply

Thoughts on creating a solver for Wordle

Posted on January 15, 2022 by mev

There is a new craze lately about a simple game named Wordle.  The idea is somewhat similar to the old game of Mastermind:

  1. There is a hidden five-letter word you try to guess.
  2. For each guess, you get feedback on whether each specific letter is: (a) correct and in the correct location, (b) correct but in the wrong location, or (c) not in the solution at all.
  3. You have a total of six guesses to find the word.

A new Wordle is released each day.  Just in the last few weeks I have heard a lot more about this game, and recently tried its web site.

One of the posts I saw on Facebook from a friend asked for feedback on strategies people use for solving it.  This led me to explore a bit more, and to go down the path of thinking about what might be involved in optimal guesses and whether I could create a solver.  This posting describes a four-step build-up I considered for creating such a solver.  Some of these steps I’ve also done.

As I looked more, I also found some resources describing how others have solved the same problem.  Rather than look in detail at their solutions, I decided I’d rather explore it first on my own.  However, I did get references to a few useful resources I could use for my construction (these coming from responses to that original Facebook post, including links):

  • There is a Google corpus of words they found most common on the internet: https://www.kaggle.com/rtatman/english-word-frequency .  This contains 39,933 total five letter words.  That is more than those that would be accepted as valid guesses, but also useful in creating an algorithm that is more robust.
  • There are pointers to the list of 12972 words you are allowed to guess: https://docs.google.com/spreadsheets/d/1KR5lsyI60J1Ek6YgJRU2hKsk4iAOWvlPLUWjAZ6m8sg/edit#gid=0 as well as the 2315 actual words that can be in the solution: https://docs.google.com/spreadsheets/d/1-M0RIVVZqbeh0mZacdAsJyBrLuEmhKUhNaVAI-7pr2Y/edit#gid=0  Using the actual mystery list of answers seems a bit like cheating to me, but I am willing to consider using the guessable list to find acceptable candidates.  However, I will focus first on creating something from the larger Google word list.
  • There is an interesting variant of Wordle that keeps changing the word to make it maximally hard: https://www.pcgamer.com/absurdle-is-like-wordle-but-it-fights-back/  This Absurdle program seems like a good test proxy to make sure my program can find the answer in a minimal number of guesses.

Using these resources, following are the four steps I’ve considered for creating a solver.

Step #1 – create solver aids

The idea here is to create some utility programs, including one that can filter a word list to include only those words that match a particular Wordle response.  For example, if the Wordle response says the second letter is “a” and is in the correct location, the letter “y” is in the solution but not as the first letter, and the letters “u”, “o” and “d” are not found, then filter the remaining words to match that response.

I’ve implemented this filter, and with it I can use the Google word corpus to narrow the remaining list and look at candidates, in order of word frequency, for potentially being the answer.

Note: Even though these are candidates for the actual solution, there could be reasons to use other words as guesses.  The primary reason is that other guesses might more efficiently rule letters in or out than guessing only words that match what you already know.
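
As an illustration of what the filter involves, here is a minimal Python sketch (the helper names are my own invention; the actual tool is a small C program). It is built on a scoring function that mimics Wordle's feedback, with 'G' for a correct letter in the correct spot, 'Y' for a correct letter elsewhere, and '.' for a miss.

def score(guess, answer):
    # Wordle-style feedback, handling repeated letters by consuming
    # each answer letter at most once.
    result = ["."] * 5
    leftover = []
    for i in range(5):              # first pass: exact matches
        if guess[i] == answer[i]:
            result[i] = "G"
        else:
            leftover.append(answer[i])
    for i in range(5):              # second pass: wrong-spot matches
        if result[i] == "." and guess[i] in leftover:
            result[i] = "Y"
            leftover.remove(guess[i])
    return "".join(result)

def filter_words(words, guess, response):
    # Keep only the words that would have produced this response.
    return [w for w in words if score(guess, w) == response]

print(filter_words(["crate", "trace", "grace"], "crane", "GGG.G"))
# -> ['crate']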

Step #2 – create something that picks an optimal guess from the (remaining) word list

The idea here is to take a look at a word list, e.g. from the narrowed candidates, and pick a guess that might optimally separate them for further narrowing.

So how to do this?  And what is an optimal guess?

My suggestion for this is based on an observation that there are 238 possible valid response codes that Wordle can provide as feedback.  In particular:

  • Each of the five letters guessed can have one of three response categories: [a] the guess is a match in the right location [b] the guess is a match but not in this location [c] the guess is not a match.  So 3x3x3x3x3 = 243 possible combinations of these responses.
  • Five of those combinations will never occur, namely those where four of your letters are correct and in the correct location while the fifth is marked as correct but in the wrong location; there is no remaining location for it.
  • 243 – 5 = 238 possible responses

So, we can evaluate each guess based on the effect it has on the candidate list.  We can look at things such as “how many buckets are occupied” and “what is the maximal number of elements in any bucket”.  As an initial implementation, I’ll start with the “minimize the largest bucket” heuristic and see how well it does.
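
In the same illustrative Python as the filter sketch in step #1 (passing in a feedback function like the score above), bucketing the candidates by response pattern and applying the "minimize the largest bucket" heuristic might look like this:

from collections import defaultdict

def buckets(guess, candidates, score):
    # Partition the candidates by the response this guess would receive
    # against each possible answer.
    split = defaultdict(list)
    for word in candidates:
        split[score(guess, word)].append(word)
    return split

def best_guess(guesses, candidates, score):
    # "Minimize the largest bucket": among the allowed guesses, take the
    # one whose worst-case remaining bucket is smallest, breaking ties
    # in favor of creating more buckets.
    def badness(g):
        split = buckets(g, candidates, score)
        return (max(len(b) for b in split.values()), -len(split))
    return min(guesses, key=badness)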

I have implemented such a guesser.  It can serve as a basic framework for a solver.  Each round uses the same candidate words as guesses, but applies them to a steadily narrowing list of remaining candidates.

Note: Using the Google list of words as candidate guesses, there is a risk that Wordle itself doesn’t accept some of them as valid choices.  The obvious answer would be to instead use the Wordle candidate list, which is what I implemented by default in the prototype.

Step #3 – validate and tune this guessing logic by trying all possible combinations – as well as Absurdle

With the guesser created in step #2, one can evaluate performance for all possible five-letter hidden words.  This would be ~40,000 possible iterations of the guesser algorithm, to find the maximal possible chains of guesses.

While exhaustive search might not be the most elegant way of doing such validation, the total combinatorics here seem small enough that it is well in the range of current compute power.

I haven’t yet tried an exhaustive evaluation, but I have tried Absurdle.  Following is a rough transcript of the guesses and the end result.

### Generate a list of mystery words "c0.txt" using five letter words in Google corpus.
### Find the best candidate

prompt% ./five_letter unigram_freq.csv > c0.txt
prompt% ./guess -m c0.txt | sort -n +2 +3 | more

# Guess 'raise' and Absurdle tells me there are no matches

### Filter out the candidate list based on Absurdle response
prompt% ./filter -i c0.txt -g raise -c '?????' -p '?????' > c1.txt

### Find the best candidate to narrow the remaining list
prompt% ./guess -m c1.txt | sort -n +2 +3 | more

# Guess 'nobly' and Absurdle tells me the 'y' at the end matches

### Filter out the candidate list based on Absurdle response
prompt% ./filter -i c1.txt -g nobly -c '????y' -p '?????' > c2.txt

# Find the best candidate to narrow the remaining list
prompt% ./guess -m c2.txt | sort -n +2 +3 | more

# Guess 'dempt' and Absurdle tells me the 'p' at 4th position matches

### Filter out the candidate list based on Absurdle response
prompt% ./filter -i c2.txt -g dempt -c '???p?' -p '?????' > c3.txt

### Find the best candidate to narrow the list (or can also guess twice)
prompt% ./guess -m c3.txt | sort -n +2 +3 | more

# Guess 'pagan' and Absurdle tells me the 'p' and 'g' are at wrong positions

### Filter out the candidate list based on Absurdle response
prompt% ./filter -i c3.txt -g pagan -c '?????' -p 'p?g??' > c4.txt

# Guess 'guppy' and Absurdle tells me I successfully guessed in five guesses!

Step #4 – automate this to interact with the web page

I will likely never do this step, but the basic idea is to create an actual program that fetches the web page and provides responses.

There presumably are existing building blocks for doing most of this.  I have used one such tool, named xdotool, to do exactly that for a Linux-based Sudoku web page solver.  So it would mostly be building this out and seeing what falls out of testing.

Overview and thoughts on the exercise

For me, there is more interest in creating a solver than in actually using it.  What makes Wordle interesting is as a test of human memory, to see what words we recall.  Reducing this to a more mechanical computer search exploits what computers do well, but otherwise misses the human memory aspect.

The choice of word list to start with can be important.  In particular, the Google corpus of 39,933 words is a much larger space to narrow than the actual mystery list of 2315 words (choosing only from that list feels somewhat like cheating to me).

There might be refinements if one considers the specific matched letter and not just a more general bucket.  For example, when Wordle tells me I have the third letter correct, I put that into a single bucket regardless of whether the match is “a”, “b”, “c” or anything else.  Expanding out the cases of those matches would significantly grow the tree beyond my 238 valid possibilities.  I expect some of this is taken care of when I filter the remaining possibilities using the match, but some algorithms, like sum of squares, might be more accurate taking the particular match into account.  I expect an indirect effect is to further emphasize the largest buckets.

The website FiveThirtyEight posted a challenge to see if you can create an algorithm for solving Wordle in three guesses or fewer, and to compute the probability of success: https://fivethirtyeight.com/features/when-the-riddler-met-wordle/ I haven’t calculated the probability yet, but here is how I would approach it:

  1. Use the guess chooser from step #2 above on only the mystery list.  It tells me ‘raise’ has the smallest largest bucket overall and ‘roate’ has the smallest sum of squares.  Use ‘roate’ to maximally spread out the distribution.
  2. Once Wordle responds, the largest possible remaining bucket has 195 words.  The sum of squares helped spread this out overall.
  3. Use the response from Wordle to filter the chosen bucket down to just the remaining words.
  4. Repeat for the second and subsequent guesses.

As far as the probabilities go, I’d have to code up the algorithm to see how quickly it could reach each of the 2315 mystery words, essentially a breadth-first expansion to see how quickly the tree covers them.
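
As a sketch of how that measurement could work, assuming the score and best_guess helpers from the earlier sketches and a mystery word list, one could play the strategy against every mystery word and count how many fall within three guesses. (Guessing only from the remaining candidates is one variant; guessing from the full allowed list is another.)

def solves_within(answer, mystery, limit, score, best_guess):
    # True if the strategy reaches the answer within `limit` guesses,
    # guessing from the remaining candidates each round.
    remaining = list(mystery)
    for _ in range(limit):
        guess = best_guess(remaining, remaining, score)
        if guess == answer:
            return True
        feedback = score(guess, answer)
        remaining = [w for w in remaining if score(guess, w) == feedback]
    return False

# Probability of success over the whole mystery list:
# rate = sum(solves_within(w, mystery, 3, score, best_guess)
#            for w in mystery) / len(mystery)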

The overall code was pretty simple to write, 270 non-commented source lines of C total.

Update: After creating my solution, I looked a bit more at other examples out there, including this one referenced at FiveThirtyEight: https://github.com/LaurentLessard/wordlesolver  My approach seems fairly similar, except this author has more fully explored the search strategy tradeoffs.

Posted in computers | 1 Reply

Ruidoso

Posted on February 8, 2021 by mev

I bought a condominium in Ruidoso, New Mexico.

The place has both an upstairs and a downstairs, and they are not connected. Upstairs are two bedrooms, each with a full bathroom. The rest of the upstairs is mostly a “great room” that flows from dining to living spaces, with a kitchen alcove as part of the great room. There are nice large windows in front as well as a nice-sized balcony. The downstairs is mostly unfinished space, with a half-bath, a washer and a dryer. There isn’t an inside connection between upstairs and downstairs; instead, you walk down a separate outside stairway. Otherwise, it is a fairly basic place with dated appliances and older carpet, but the price was right. The sellers were supposed to remove their furniture but appear to have been slow doing this, so I expect a charitable organization to be picking that up in the next few days. It also looks like the sellers basically abandoned even personal things in the closets, and the selling realtor didn’t do their side of the agreement to remove items. Sigh.

The condo is just slightly outside Ruidoso at 7500ft elevation. This makes for a drier and more temperate climate that is about 15 degrees cooler than Austin year round, for both high and low temperatures.

Temperatures are slightly milder than northern Colorado: not quite as warm in the summer and not quite as cold in the winter. On average, Ruidoso gets about half the snow each winter that northern Colorado does. Annual moisture is ~17 inches vs. ~14 inches, with more of a summer monsoon than spring snowstorms. As with other dry climates, it is sunny most of the time.

Deer were hanging around the condo; my guess is people might be feeding them. I think there are bears and other wildlife in the area as well.

The greater Ruidoso region has a summer population of ~30,000 and has a bit of a vacation area feel to it. Lots of “cabins”, condos and building lots. An overcrowded downtown reminds me of Estes Park with a lot of touristy shops. I think the summer tourist season is more popular than winter. The nearest big cities are El Paso (140 miles), Albuquerque (180 miles), Lubbock (250 miles), Amarillo (290 miles). Ruidoso is ~560 miles from DFW and ~580 miles from Austin. The roads are good and if the weather cooperates, I can easily drive in a day.

My condo is about 15 miles from Ski Apache, a small ski area on the slopes of Sierra Blanca. This year is apparently a dry year, with less snow than normal, but there were still some skiers. The road is paved, but narrow and winding as it climbs ~2000ft up to the base of the ski area. The Sierra Blanca summit is just slightly less than 12000 feet and is the tallest and most prominent peak in southern New Mexico: https://www.summitpost.org/sierra-blanca/151784. Very few of the roads are flat, but there should be some fun cycling up and down these hills.

Views from a lookout on the road up to Ski Apache show you can see pretty far, looking mostly off to the northeast. A lot of this area is Lincoln National Forest, with some hiking as well as campground areas. Directly to the south is mostly the Mescalero Indian reservation. The overall uplifted area of mountains is about 20 miles wide and 60 miles long and is surrounded by lower, dry desert areas. Ruidoso is in the northern part of this region and Cloudcroft is further south.

I’ll be interested in hiking up Sierra Blanca, since it is ~7500 feet higher than surrounding areas such as White Sands National Park on the west side. In the foreground of that view is Monjeau Peak at 9582ft: https://www.fs.usda.gov/recarea/lincoln/recarea/?recid=80034

A screenshot from the Google Maps satellite view shows the rough location: a green blob in the middle. To the west (left) is the Rio Grande Valley between Socorro and Las Cruces. In the valley in between are the White Sands as well as the Malpai lava flow, with another lower range separating them from the Rio Grande. To the east (right) is Roswell, with some green farming areas. In between is this uplifted area, with Sierra Blanca Peak highlighted. Toward the bottom right is Carlsbad Caverns, a longer day trip from Ruidoso.

So why a condo in Ruidoso?

In the short run, as the pandemic continues, I figure it is nice to have an alternate place that is one long day’s drive from Austin. I’m getting high-speed internet installed in the next day or two; then I should be able to rent a car, drive up one weekend, stay for a week or two while working from Ruidoso, and drive back on a subsequent weekend. It also happens to be one long day’s drive to northern Colorado, so it is a shorter distance and potential stopover point between CO and TX.

After we unwind from the pandemic, I’m not sure how much I’ll be able to work remotely, but it should also be a spot for shorter vacations or a jumping-off point for other adventures. I’m thinking of a three-week trip in May: a first week working remotely, then a week of bicycling through New Mexico, and then another week of working before returning. Hopefully I can escape during some summer stretches when Austin gets particularly hot and muggy.

After these next few years, who knows. Eventually it would be nice to make some extended bicycle travels for a year or two and have a home base to store stuff and occasionally visit between travels — I do see myself getting back to Colorado after that.

If some of these thoughts and plans don’t work out as well as expected, I can always sell again later. Prices were reasonable (my best guess is ~1/2 the cost per square foot of my Austin town home; Ruidoso is closer to prices in El Paso or Lubbock than to Austin, which is a hotter market). If the pandemic causes more people to work remotely from places like Ruidoso, it could become more popular, as it is otherwise a nice but somewhat remote area.

It is both an interesting project to get the Ruidoso condo set up and an interesting get-away to visit. This week is a first week of setting things up while also mostly working remotely.

Posted in ruidoso | 3 Replies

Riding every day in November, Boulder edition

Posted on November 27, 2020 by mev


Just before Thanksgiving weekend, I drove to Boulder. The plan is to work from here until New Year’s and be a little closer to family over the holidays. To continue my “Riding Every Day in November” challenge, I decided to augment my Austin grocery store rides by cycling to every Safeway, King Soopers, Whole Foods and Sprouts in Boulder, as listed in the map above.

Posted in bicycling | 1 Reply

Riding every day in November

Posted on November 1, 2020 by mev (updated November 5, 2020)

I entered a challenge to see if I could ride my bike every day in November. Along the way, I am also tracking how many Sprouts, Randalls, Whole Foods and HEB stores I visit. Below is a map that tracks the stores I have visited.

Posted in bicycling | Leave a reply

Test photo from phone

Posted on September 27, 2020 by mev
Austin yard sign
Posted in photo | Leave a reply

Web site set up

Posted on September 20, 2020 by mev (updated September 27, 2020)

A first blog post to see if the new site has been set up. This installation uses WordPress Multisite, so I can host multiple sub-sites in the same installation.

On the menu I’ve added custom entries for a number of referenced site materials. I have also copied over most of my bike trip sites and created a summary page of these bicycle trips, including an overview table.

Posted in website | Leave a reply
