This is beautiful. A thread about acquaintance spam spiraled off into a furious bout of invective and harsh words. It amazes me that people who should know better don't. Now it may be true that somebody "started it", but at least they had the tact and intelligence to back off and not involve themselves in the comment war that ensued.
I just found it hilarious that a supposedly high signal/noise ratio medium (blogging/RSS/contemporary "collaboration" tools) can be reduced to a tangled USENET thread in less time than it takes to click on the "BlogThis" bookmarklet in your toolbar.
Wednesday, March 30, 2005
Saturday, March 26, 2005
Python typing part II
Again, I still don't understand why we really need this type checking stuff. It seems to me that since the proposed checks would be performed at runtime anyway, the whole thing is a bit of a moot point. I may be missing something critical here, but AFAIK the BDFL is not considering modifying the compiler/interpreter to handle type safety. Why not just have a library of decorators that lets you do type checking?
I've hacked together a strawman typechecking decorator that I would appreciate comments and flames on. I'm just trying to get a grip on what the issue with keeping the language small and consistent is.
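To give a flavour of what I mean, here's a rough sketch of that kind of decorator. To be clear, the name and calling convention below (typechecked, the returns keyword) are purely illustrative for this post; they are not the actual strawman or any official proposal.

import functools

def typechecked(*arg_types, **options):
    """Check positional argument types (and optionally the return type) at call time."""
    returns = options.get('returns')
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for value, expected in zip(args, arg_types):
                if not isinstance(value, expected):
                    raise TypeError('%r is not a %s' % (value, expected.__name__))
            result = func(*args, **kwargs)
            if returns is not None and not isinstance(result, returns):
                raise TypeError('return value %r is not a %s' % (result, returns.__name__))
            return result
        return wrapper
    return decorator

@typechecked(int, int, returns=int)
def add(a, b):
    return a + b

add(1, 2)        # fine
# add(1, 'two')  # would raise TypeError at call time

The point is that all of this stays in library-land: no new syntax, and anyone who doesn't want the checks simply doesn't use the decorator.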
Friday, March 25, 2005
Static typing and Interfaces in Python
OK. So here's the obligatory weigh-in on static typing in Python. I don't see why we need to introduce a new syntax for something that could be handled very cleanly with decorators (and I'm sure many people have argued this point).
I also choke on the whole "Interface" thing. We don't need those either. Just create a class with a bunch of
raise NotImplementedError
statements for all the method implementations. Anybody subclassing the class would be forced to implement the methods. We could even have a metaclass for it that checks that all the methods are implemented - again, no need for extended syntax.
One of the things I like most about Python is its consistency: things usually work the same way. Let's keep it nice, simple and predictable.
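To make the metaclass idea concrete, here's a rough sketch. All the names (RequiredMethodsMeta, Quacker, __required__) are made up for this post, and in the Python of the day you would spell the hook __metaclass__ rather than the metaclass= keyword used below.

class RequiredMethodsMeta(type):
    """Refuse to create a direct subclass that leaves a required method unimplemented."""
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        if '__required__' in namespace:
            return  # this is the "interface" class declaring the contract
        missing = [m for m in getattr(cls, '__required__', ()) if m not in namespace]
        if missing:
            raise TypeError('%s does not implement: %s' % (name, ', '.join(missing)))

class Quacker(metaclass=RequiredMethodsMeta):
    __required__ = ('quack',)

    def quack(self):
        raise NotImplementedError

class Duck(Quacker):          # fine: provides quack()
    def quack(self):
        return 'quack'

# class SilentDuck(Quacker):  # would blow up at class-creation time:
#     pass                    # TypeError: SilentDuck does not implement: quack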
On a slightly related note I see that the BDFL is considering wiping out map, filter and reduce. I applaud this since the list comp syntax neatly handles these functions' roles.
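Roughly what I mean, with made-up data (reduce is the odd one out - there's no direct list comp equivalent, but a plain loop reads just as well):

nums = [1, 2, 3, 4]

# map and filter fall straight out of a list comprehension:
squares = [n * n for n in nums]            # map(lambda n: n * n, nums)
evens = [n for n in nums if n % 2 == 0]    # filter(lambda n: n % 2 == 0, nums)

# reduce has no list comp equivalent, but a simple loop says the same thing:
total = 0
for n in nums:
    total += n                             # reduce(operator.add, nums)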
Friday, March 11, 2005
Wednesday, March 09, 2005
Reading the fine print
Reading the printer-friendly version of articles removes distractions.
To my great chagrin a number of signal-heavy sites now have ads scattered all over their articles, breaking up the flow and generally getting in the way. Not to mention the Flash ads that whirl and blink, or the DHTML ads that seem to want to take over the entire screen. I'm looking at you, IBM, ITworld.com, cnet.com and all the others that for years have had great technical/business/whatever-you-want articles but now waste my time and visual bandwidth with their crappy, non-targeted ads.
No more.
From now on I will be reading only the "printer-friendly" version of these articles. This version contains no ads (well, there may be some but they are usually at the top or bottom) and in general has a better layout since they have removed the ugly navigation that these providers have managed to plaster all over 2/3 of my screen.
On a related note, check out Greasemonkey - a Firefox extension that allows you to run custom DHTML on each site you visit. By default this extension ships with a script that disables Google ads. When I say disables, I actually mean hides: the original GET to the server still returns the HTML page with the ads built in, but Greasemonkey strips them out on the client side. So your favourite blogger is still reaping the benefits of the ad bar, but you don't have to suffer the indignity of giving up screen real estate.
This is so wrong!
Now I'm not usually a namby-pamby "love the animals" kind of guy, and I've eaten (and enjoyed) my fair share of rabbits, but this is just too much.
Introducing the main idea
Start each article/post with a single phrase that briefly describes the idea being communicated.
I've been reading Dave Pollard's excellent weblog for a little while now and I've noticed that at the beginning of each article he writes "The Idea" followed by a description of what he wants to communicate in the article.
This is a great tool for focussing the contents of an article and it also allows readers to read the full story with the basic idea in mind. It provides a sort of scaffolding for the rest of the article so that additional ideas can glom onto the original one to create a (hopefully) coherent whole.
Monday, March 07, 2005
Automatic filing with del.icio.us
From Otaku, Cedric's weblog: Automatic filing with del.icio.us. I made a couple of modifications to allow the launch to happen in another window. Drag this bookmarklet to your toolbar to apply the "toread" tag to a page.
Cedric also made a delete from del.icio.us bookmarklet.
Wednesday, March 02, 2005
Online medical records - strawman
David Toub writes that the three main obstacles to having online medical records for patients are
- Concerns about security (getting better, but reports of major academic institutions being hacked don't help. Regardless, EMRs are still inherently more secure than paper records)
- Cost (big issue - a small practice just doesn't have $20k to blow on a new system)
- Startup: How to input thousands of paper-based records into an EMR fast and inexpensively
So let's take a look at these items one by one.
Security concerns
It is normal, I think, in this day and age, to worry about the accessibility of your data online. With so many scandals and so much spam and hacking, it is easy to become overly anxious about exposing your information to a potentially hostile environment. The answer to this is a liberal application of strong encryption. I'm not talking about weak SSL with its pansy 128-bit keys (many implementations of which have failed to demonstrate that there is no redundancy in the key bits), I'm talking about public key cryptography combined with some sort of DRM tool that would allow doctors to access a patient's information once, in the controlled context of a visit.
I recently discovered a feature in PGP that allows you to encrypt a text document for a built-in viewer distributed with PGP; in addition to requiring you to enter the appropriate passphrase for the key, the viewer displays the text in a non-copyable, TEMPEST-resistant window. Something like this could be used so that a patient would be able to authorize a given physician to access their data once (and only that one time). The physician would have full access to the patient's file for the duration of the consultation, but afterwards they would be unable (even for their own purposes) to retrieve information other than
- The fact that the given patient visited them
- The treatment that they provided
- Perhaps (but not necessarily) a general description of the problems that the patient presented with
The key point here is that it is definitely possible to build a system where the patient controls all access to their information. This would also limit physicians' liability in certain situations. For example, staff from a doctor's office would be categorically unable to share any details about patients other than their name (or other identifier) and the fact that they visited. It would even be possible to visit a doctor anonymously: as long as the physician has access to your records, they don't really need to know who you are. This would encourage people to use these records even for things that could be held against them, such as abortions, HIV/AIDS treatments and so on.
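Just to make the "authorize once" idea concrete, here's a toy sketch in Python. It leans on the third-party cryptography package and hand-waves away the tamper-resistant viewer described above, so treat it as an illustration of the key handling, not a design.

from cryptography.fernet import Fernet

record = b'patient chart: ...'

# The patient keeps the record encrypted under a key only they hold.
master_key = Fernet.generate_key()
stored_record = Fernet(master_key).encrypt(record)

# For a single visit, the patient re-encrypts a copy under a one-off visit key
# and hands that key to the physician.
visit_key = Fernet.generate_key()
visit_copy = Fernet(visit_key).encrypt(Fernet(master_key).decrypt(stored_record))

# The physician can read the visit copy during the consultation...
chart = Fernet(visit_key).decrypt(visit_copy)

# ...but once the visit copy is discarded, the visit key is useless; the stored
# record still answers only to the patient's master key.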
Cost of a new system
In order for the system described above to really work it would need to be de-centralized and managed by each patient individually. Any attempt to centralize the system would probably open the door for abuse (if for no other reason than administrators of the central system could track usage statistics in a non-anonymous way).
I don't think that physicians would need to pay anything at all to use the system (other than the fees for a computer and an internet connection).
We have seen from systems like blogging, FOAF, bookmarks and so on that privately owned and managed resources can still be shared profitably.
Cost of inputting older records
I don't think this is really a big deal either. The fact of the matter is that right now (most) medical records are maintained by each healthcare institution individually. When a patient goes for X-Rays or for blood tests or any other tests, the testing body needs to send the results of the tests back to the physician. How easy is it really for a given physician to find out everything about the patient that is presenting? Not very, I'll wager. This is mainly due to poor organization and not any malice on anybody's part.
There isn't really a need to input all prior records at once. It would (or should anyway) be enough to input them as they become pertinent to the current activities of the patient.
For example, let's say that a patient had a colonoscopy at the request of their physician last year. Whatever condition prompted the test was resolved weeks after the test. The results of that test (and even, to a certain extent, the fact that it was performed) are of no import the next year when the patient presents with a broken wrist. In that situation, the physician would simply enter the new information without worrying about the lack of the results of the colonoscopy (never mind the fact that the patient should be able to selectively allow the physician to access only the parts of their record that are pertinent).
If on the other hand the same patient presented with bloody stool and the physician wanted to order a fecal occult blood test they should be (perhaps verbally) made aware of the colonoscopy the previous year. At that point the physician would order the results from the lab (or get them from the patient). So far I think this is how things work today. The difference is that once the test results are obtained the physician would enter the new information into the system.
In this way the system would be built up incrementally over time and would not require a huge amount of up-front investment.
Tuesday, March 01, 2005
The future flavour of webservices
You've probably heard about webservices: they're those things that use SOAP, XML-RPC and a whole alphabet soup of technologies better left untouched. Another acronym to know is REST. It is the red-headed stepchild of webservices - the technology that IBM, Microsoft, BEA and all the others don't want you to know about. Why? Because it exists already. Your browser already uses REST to browse the web. Certain tools (like WebDAV) use REST to store and modify resources on the web. And where would we be without POSTs from forms?
SOAP and XML-RPC are both protocols that allow you to exchange XML-encoded messages with other hosts on the network. Let's say you want to find all the top-scoring players in the NHL. You prepare an XML document that describes your request and you POST it to a specific host that you know will actually respond. The URL you post to is called the "endpoint" and theoretically represents a resource or process. The host returns another (specially prepared) XML document that (once you un-wrap it) contains the list of players.
In REST things are much simpler. All you do is GET the URL. No fancy documents or anything. GET the URL and the server returns the list.
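Here's what the difference looks like from Python, using only the standard library (module names as in Python 3; in 2005 these were urllib2 and xmlrpclib). The URLs and method names are completely made up - there is no such NHL stats service - the point is just the number of moving parts.

from urllib.request import urlopen
import xmlrpc.client

# REST-ish: the list of top scorers is a resource with a URL, so you just GET it.
top_scorers = urlopen('http://stats.example.com/nhl/top-scorers').read()

# XML-RPC: you call a method on an endpoint; the library wraps your request in
# an XML envelope, POSTs it, and unwraps the XML response for you.
proxy = xmlrpc.client.ServerProxy('http://stats.example.com/RPC2')
top_scorers = proxy.nhl.getTopScorers(2005)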
What is interesting however is the difference in viewpoint between what are now being called the WS-* camp (the SOAP and XML-RPC guys) and the RESTafarians.
The RESTafarians are very focussed on the semantic nature of URLs, often talking about the relationship between URLs and the resource representations they refer to. WS-* guys often talk about how the latest envelope spec can be enhanced.
In case you're wondering, I'm pretty much sold on RESTafarianism although I do like XML-RPC since it has a much simpler design/interface than SOAP.
Here is a very cogent introduction to the principles and philosophies of REST: http://naeblis.cx/rtomayko/2004/12/12/rest-to-my-wife
Here is the home of XML-RPC: http://www.xmlrpc.com
Here is the home of the SOAP working group: http://www.w3c.org/2000/xp/Group/