I recently published lisp-unit2, a major refactoring of the lisp-unit library. I am one of the biggest public users of lisp-unit (accounting for 15 of the 28 libraries depending on it in quicklisp), and I also have some very large internal test suites for non-public applications. I have been using lisp-unit regularly for at least the past 5 years and perhaps longer. After such extended use, I finally hit an issue that made me start refactoring lisp-unit. (I could not find where a “compiler error” message was being printed from, which ended up not even being lisp-unit’s fault, but lisp-unit didn’t help me find it.) This, compounded with years of “wouldn’t it be nice if lisp-unit did this” moments, finally made me go about fixing all my gripes.
The things I most wanted to change were:
- Lots of non-obvious flags that do odd things. I always ended up setting all of them to true from their default state, so having 3-4 flags that always needed to be remembered was strange
- No go-to-definition. I always wanted to be able to jump to a test’s definition from its name and couldn’t
- Tags/Suites. I always had wrapper macros that allowed me to organize my tests into suites. Eventually lisp-unit added tags, but the syntax was a bit odd and the usage was not always obvious.
- Better condition testing. I like conditions and frequently need to test protocols that use them; lisp-unit was deficient here
- No easy context control. Databases and the like all had to be handled by wrapping the test body in layers of with-resource-context-style macros
- Tests were not compiled until they were run, so compiler errors/warnings were often caught later than necessary and were harder to track down than they should have been
All of these and more were accomplished in lisp-unit2. (Check out the README for documentation and more details).
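To give a flavor, a lisp-unit2 test with tags looks roughly like the sketch below. This is my paraphrase of the API; consult the lisp-unit2 README for the authoritative syntax.

```lisp
;; Paraphrased sketch of lisp-unit2 usage -- see the lisp-unit2 README
;; for the authoritative syntax.
(lisp-unit2:define-test addition-works
    (:tags '(arithmetic smoke))
  (lisp-unit2:assert-eql 4 (+ 2 2)))

;; Tests compile to named functions, so go-to-definition works, and a
;; suite is just a tag query:
(lisp-unit2:run-tests :tags '(smoke))
```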
I gathered some statistics about which test libraries were being used the most, as judged by number of dependencies in quicklisp. The libraries from the CLiki page on Test Frameworks were cross-referenced with quicklisp (because some are defunct). The number listed is the number of systems in quicklisp which require one of the testing libraries.
STEFIL (46) - there seem to be two branches
-- HU.DWIM.STEFIL (30)
-- STEFIL (16)
After later analysis, I probably could have gotten by with writing extensions to STEFIL, but I already had a long-term investment in lisp-unit. I don’t think I spent more time updating lisp-unit than I would have spent learning STEFIL, writing the extensions I needed, and converting all my tests. Hopefully lisp-unit2 fills a need that others have as well.
The most interesting things for me while refactoring were handling the dynamic contexts and using signals to orchestrate output, debugging, result collection, etc. These two abstractions combined really nicely to offer all the flexibility I wanted in output and debugging. I think they will also allow lisp-unit2 to be highly extensible in an easy-to-manage way. I also liked that these systems made it easy to write a meta-test-suite for lisp-unit2 in lisp-unit2.
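A minimal sketch of the signal-driven idea (the condition and function names here are invented for illustration, not lisp-unit2’s actual internals): each result is signaled as a condition, and the runner’s handlers decide whether to collect it, print it, or invoke the debugger.

```lisp
;; Illustrative only: these names are invented, not lisp-unit2's internals.
(define-condition test-result ()
  ((name :initarg :name :reader result-name)
   (status :initarg :status :reader result-status)))

(defun run-one-test (name thunk)
  ;; Signal the result; whoever is listening decides what to do with it.
  (signal 'test-result
          :name name
          :status (handler-case (progn (funcall thunk) :pass)
                    (error () :fail))))

(defun run-with-collection (tests)
  "Run TESTS (an alist of name . thunk), collecting results via a handler."
  (let ((results ()))
    (handler-bind ((test-result
                     (lambda (c)
                       (push (cons (result-name c) (result-status c))
                             results))))
      (loop for (name . thunk) in tests
            do (run-one-test name thunk)))
    (nreverse results)))
```

Output formatting or debugger entry can then be layered on as additional handlers without changing the runner itself.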
The Jam, 817 W University Ave, Gainesville FL, has been my home away from home for a while. The Jam is a local music venue that has a full backline and shared instruments. Anyone of any skill level can jump in and play some music with new friends on one of their open jam nights. They book great shows as well and bring some of the best music to Gainesville (The Wailers, Dopapod, Herd of Watts, just to name a few).
I recently helped them get a WordPress site up and running so that Luddite, Facebook-less folks like me can find who’s playing when at The Jam. I just wanted to give them a quick link because they are the best kind of folks, and The Jam has changed Gainesville’s music scene immeasurably for the better. Come check them out if you are around.
Last week I was refactoring some code related to IP addresses in our internal software at Acceleration.net. In our old code, I came across a pretty speedy IP-address printer that was faster than a naive approach by a good margin. A few days later Stas Boukarev (stassats) happened to be discussing optimizing this function in #lisp. I sent him our slightly optimized version and, after pasting back and forth, by the end of the day he had a very rapid function.
I added a pretty speedy IP-address parser, and the result of this work is cl-cidr-notation (https://github.com/AccelerationNet/cl-cidr-notation). It provides fast, portable functions for reading IP addresses and CIDR blocks from strings into integers, and for writing those integers back into dotted-quad notation (184.108.40.206) and standard CIDR notation (0.0.0.0/30). It can also efficiently print range strings (0.1.2.3-220.127.116.11).
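The core trick in printers like this is to skip FORMAT and emit digit characters directly. A simplified illustration of the idea (not the tuned code in cl-cidr-notation, which is far more heavily optimized):

```lisp
;; Simplified illustration; the real cl-cidr-notation code uses
;; preallocated buffers, type declarations, etc.
(defun write-octet (octet stream)
  "Write an integer 0-255 to STREAM without going through FORMAT."
  (multiple-value-bind (hundreds rest) (truncate octet 100)
    (multiple-value-bind (tens ones) (truncate rest 10)
      (when (plusp hundreds)
        (write-char (digit-char hundreds) stream))
      (when (or (plusp hundreds) (plusp tens))
        (write-char (digit-char tens) stream))
      (write-char (digit-char ones) stream))))

(defun ip-string (ip)
  "Render a 32-bit integer as a dotted quad, e.g. 3232235777 => \"192.168.1.1\"."
  (declare (type (unsigned-byte 32) ip))
  (with-output-to-string (s)
    (loop for shift from 24 downto 0 by 8
          do (write-octet (ldb (byte 8 shift) ip) s)
          unless (zerop shift) do (write-char #\. s))))
```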
My Common Lisp documentation search engine has been published at http://lisp-search.acceleration.net. In a previous post I wrote about using the montezuma full-text search engine to build an index of documentation available from within my Common Lisp runtime. I ended up going the extra mile on this one and indexed all of the documentation available for all of the packages in quicklisp (as well as README files and other packages that SBCL had already loaded). The result is a 90M search index (4M tar.gz) that can be used to search through all of the doc strings of all of the easily loadable packages.
The user interface is a bit clunky and searches don’t always return the most relevant results first, but it is live, fast, and already seems useful. Perhaps with some help from the internet, this search engine can reach its full potential. I named the software package that does this manifest-search-web, because it was inspired by gigamonkey’s manifest project. I still have not come up with a reasonable name for the published search engine (lisp-search seems a touch blasé and under-descriptive).
Hopefully, I will never again spend time writing a library only to find the already written, open source alternative after I publish mine. Also, perhaps this will inspire better doc-strings, now that doc-strings might be what leads to someone finding your project.
Other things to do:
- Integrate manifest-search with slime
- Have the documentation index be distributable in quicklisp (not sure how to do that efficiently)
- Find a way to unify CLIKI, l1sp.org, lisp-search and other lisp documentation resources into a more cohesive single website / search
- Improve the query language to ensure that it behaves according to user expectations as opposed to lucene expectations
As always, please report bugs and make suggestions for improvements. Cheers and happy lisping.
Domain Names International (InTrust Domains, DNIDomainMarket.com) has been repeatedly spamming me with emails about domains similar to ones I own. The emails come from various random domains, but when you visit one of those domains, you are immediately redirected to dnidomainmarket.com. On the home page of their website they advertise how the BBB says they can be trusted. This general shadiness makes me not want to follow the opt-out link. Also, googling makes it fairly obvious that the opt-out will not work anyway.
I would really suggest that others receiving this spam click the link below to the BBB and file another advertising complaint against the company. Also consider reporting it to SpamCop or similar services.
The BBB complaints section for InTrust Domains
A common complaint from a co-worker is not being able to find relevant library functionality. We have libraries that do some tasks well, but if you haven’t used one before, how are you to know that it is there? Moreover, how do you find what you are looking for among all of the utility libraries currently loaded?
After seeing Peter Seibel’s Manifest screencast, I was struck by the idea that you could index all the doc strings to provide a powerful search tool. I don’t know about powerful yet, but this idea has turned into at least a search tool: Manifest-Search. It is the product of one day’s hacking and so should not be construed as the end-all-be-all Common Lisp search tool; however, it is at least a step in that direction.
I would like to eventually get this integrated more fully with both quicklisp and manifest, but that is all in the future. I think it would be amazing to search for functionality I need, and get documentation for a library I have not yet installed, but is distributed by quicklisp.
In the first released version of access I defined the setf versions as (defun (setf accesses) (new o &rest keys)…). In order to make this work out for plists and alists (where adding a key can result in a new HEAD element), I was forced to return the updated object rather than the “new” value that setf usually returns. I was unhappy with this oddity at the time but didn’t know directly how to fix it (obviously some macrology was in order to capture the “place” being modified).
Today I looked into the docs for define-setf-expander and saw how to transform my code into “correct” setfs. To do this I transformed my previous setf functions into set-access and set-accesses, which return (values new-value possibly-new-object). I then defined my setf expanders in terms of calling those functions and setting the place passed in to possibly-new-object. It took a little while to figure out, and I’m still not entirely sure I wrote the optimal Common Lisp for this. However, I was able to elide the outer setf from expressions in the tests like (setf pl (setf (access pl 'one) 'new-val)), and now the plain (setf (access pl 'one) 'new-val) returns 'new-val as would be expected.
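For the curious, the shape of the expander is roughly the following. This is a simplified sketch specialized to plists; the names mirror the ones above, but the bodies are invented for illustration and are not access’s real implementation.

```lisp
;; Simplified, plist-only sketch -- not ACCESS's actual implementation.
(defun access (object key)
  "Read KEY from OBJECT (here just a plist)."
  (getf object key))

(defun set-access (new-value object key)
  "Return (values NEW-VALUE possibly-new-object); a fresh key conses a new head."
  (let ((copy (copy-list object)))
    (setf (getf copy key) new-value)
    (values new-value copy)))

(define-setf-expander access (place key &environment env)
  (multiple-value-bind (temps vals stores store-form access-form)
      (get-setf-expansion place env)
    (let ((key-temp (gensym "KEY"))
          (store (gensym "STORE"))
          (obj (first stores)))     ; assumes a single store variable
      (values (append temps (list key-temp))
              (append vals (list key))
              (list store)
              ;; Compute the possibly-new object, write it back into PLACE,
              ;; then return the new value, as SETF should.
              `(let ((,obj (nth-value 1 (set-access ,store ,access-form ,key-temp))))
                 ,store-form
                 ,store)
              `(access ,access-form ,key-temp)))))
```

With this in place, (setf (access pl 'one) 'new-val) both updates the variable pl and evaluates to 'new-val.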
There were some requests for more, better examples of where access might be useful:
- My html components have a plist representing direct html attributes. I update these with (setf (accesses ctl 'attributes 'name) “myFormName”) and read them with the corollary (accesses ctl #'attributes :name). Note that both forms work even though one uses a local symbol and one a keyword (they are compared by symbol-name so that I can think about it less). I am also OK referring to the attributes function by name or by function object (both will result in calling the attributes function on ctl).
- Another example from the web domain: I often store a reference to a database object on the control that is responsible for displaying it. Thus getting the database primary key off of the data for a control can be (accesses client-form ‘data ‘adwolf-db:accountid). This allows me (where useful) to ignore the difference between an unsaved, new object and an object that hasn’t been created yet (for things like putting the id in the url, the difference is irrelevant).
- While not currently implemented this way, my group-by library, which groups items into nested alists or hash tables, could potentially use access to handle the different implementations
- When printing my database objects in debug/log messages, I want to output some columns (but only those the database object actually has). This way I can define one printer for all my db objects with a minimum of fuss:
(defmethod print-object ((o clsql:mssql-db-object) (s stream))
  "Print the database object, and a couple of the most common identity slots."
  (print-unreadable-object (o s :type t :identity t)
    ;; ITER comes from the iterate library
    (iter (for c in '(id accountid serviceid transactionid title amount name))
          (for v = (access o c))
          (when v (format s "~A:~A " c v)))))
In general I find access useful whenever I need to operate on some set of keys that may or may not exist in a dictionary-like object and I don’t care to receive any errors related to missing keys.
- A keys / values interface to ease arbitrary dictionary iteration would be a worthy addition (alexandria seems to have all the relevant functions implemented, so it would mostly be a dispatch to those)
- When a dictionary doesn’t exist, there should be some way of telling it how to create that dictionary (currently you will get a plist).
- Extensibility to allow support for other dictionary-like structures.
Access is a Common Lisp library I just culled out of our immense utility mud ball and refactored into a library all its own. Access puts getting and setting values in common data structures behind a single unified API. As such, you could access a specific key from an alist stored in a hashtable stored in the slot of an object as (accesses o 'k1 'k2 'k3). It also supports setting values: (setf (accesses o 'k1 'k2 'k3) “new-val”). Obviously there are some limitations to this approach, but for me, with my coding conventions, I don’t tend to run into them (see the README for details).
Access has replaced some of my need for forms like (awhen a (awhen (fn1 it) (fn2 it))) with (access a 'fn1 'fn2). To me, it more accurately expresses what I am trying to do while ignoring the vagaries of shifting implementation details. It also eases setting values in nested objects because it handles propagating the value up the chain rather than me having to do that myself (i.e. adding a new key-value pair to the front of an alist stored in an object automatically saves the resulting alist back into the object). I don’t expect that this is tasteful coding, but it is easier and lets me avoid getting mired down deciding whether I want an alist, plist, hashtable, or object, because the cost to change it later is essentially zero.
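As a concrete sketch (the data and keys below are invented for illustration):

```lisp
;; Hypothetical nested data: a plist of plists.
(defparameter *config*
  (copy-tree '(:server (:host "localhost" :port 8080))))

(access:accesses *config* :server :port)     ; => 8080
(access:accesses *config* :server :missing)  ; => NIL, not an error
;; Setting a fresh key propagates the new plist head back into *CONFIG*:
(setf (access:accesses *config* :server :timeout) 30)
```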
Performance is rarely an issue in the apps that I tend to write. However, if it were, I would not use access, as it does significant type and dispatch analysis that could be avoided by using the specific access functions of the data structure in use.
I moved all of my Common Lisp github projects from my personal github page to the new AccelerationNet github organization. Sorry for any inconvenience.
One of our many WordPress installations was not allowing image cropping. I tracked this down to the image failing to load, which in turn was caused by an extra \r\n preceding the image content. This extra line break appears when an included PHP file ends in ?>\r\n. Because PHP writes any content outside of a php tag to the output stream, this causes an extra newline to precede any other content you might have been trying to send (such as a JPEG image). This can cause all sorts of problems, in this case corrupting the JPEG output.
To fix this problem I investigated how to get grep to search in multiline mode (install pcregrep). I then ran into the problem that $ matches end of line rather than end of file. After some googling I found that \z matches end of file, and with that I was off to the races. This pcregrep expression will find PHP files with pesky trailing-whitespace issues.
pcregrep -Mri --exclude-dir=.svn --exclude-dir=css '\?>\s+\z' wp-content/plugins
The offending plugin in my case was an older version of wp-e-commerce (which is not easily upgradeable). After finding all the files with trailing whitespace and removing it, I could now crop images in wordpress again.