Connectivity implementations

We can talk about connectivity and interoperability until the cows come home, but if we don’t have a technical solution for how to actually perform the information transfer, we won’t go very far.

The kinds of technical solutions I am talking about are things like automatic HTTPS POST/GET transfers, FTP drop boxes, (gasp) email, or any number of other proprietary solutions. Protocol standards like HL7 are a level above this discussion.

I don’t have intimate knowledge of the inner workings of other EMR/EHR products, so I can only speak for our own. The focus of Ankhos is on office workflow and chemotherapy documentation, not file storage and retrieval. As such, Carolina Oncology Specialists are also implementing an EMR product for larger-scale file storage. If Ankhos were to generate a patient visit report, it would need to push that report to this archive. Another source of medical data in the office is the lab equipment: the lab machines currently dump their values via FTP to a pre-defined server on the office intranet. Storing these lab values in the archive and accessing them from Ankhos is a necessity. Let’s talk about specifics:

How do we distribute up-to-date lab values and status reports to separate applications in an efficient manner while maintaining modularity?

(S)FTP Drop:

The solution currently planned is simply an FTP drop: the archival program and Ankhos will poll an FTP folder for file changes and ingest new files accordingly. I believe this solution is fine for the short term, but as more data sources and output paths are added, keeping track of all that routing metadata becomes painful and error-prone. Another drawback is that the data in each application cannot be guaranteed to be up to date.
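To make the trade-off concrete, here is a rough sketch of what the polling side could look like (the host, credentials, folder, and ingest step are all hypothetical placeholders, not Ankhos code):

# Rough sketch of polling an FTP drop folder for new files.
# Host, credentials, folder, and the ingest step are hypothetical placeholders.
import time
from ftplib import FTP

def ingest(ftp, name):
    # Application-specific: download the new file so it can be parsed locally.
    with open(name, 'wb') as out:
        ftp.retrbinary('RETR ' + name, out.write)

def poll_drop_folder(host, user, password, folder, interval=60):
    seen = set()
    while True:
        ftp = FTP(host)
        ftp.login(user, password)
        ftp.cwd(folder)
        for name in ftp.nlst():      # files currently sitting in the drop folder
            if name not in seen:
                ingest(ftp, name)
                seen.add(name)
        ftp.quit()
        time.sleep(interval)         # every application pays this polling delay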

HTTP(S) Push:

The benefit of this solution is that the data in each part of the platform is always up to date (disregarding transfer time). The problem is that it requires much more configuration than an FTP drop: parameters must be standardized, and the distribution server would still have to keep track of its listeners. If each application were allowed to transfer directly to another application, cycle detection would become critical.
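The push itself is trivial; it is the bookkeeping around it that costs. A minimal sketch, assuming a JSON payload and a made-up listener URL:

# Minimal sketch of an HTTP(S) push to one registered listener.
# The listener URL and payload shape are assumptions, not a standard.
import json
import urllib.request

def push_update(listener_url, payload):
    req = urllib.request.Request(
        listener_url,
        data=json.dumps(payload).encode('utf-8'),
        headers={'Content-Type': 'application/json'},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

# The distribution server would call something like this for every listener
# the moment a new result arrives, e.g.:
# push_update('https://app1.example/incoming', {'data_source': 'cbc', 'file': 'new_cbc.dat'})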

It seems like neither of these solutions is optimal: they are either too complex or unable to guarantee up-to-date information without frequent polling over the network.

Proprietary solutions:

If an EMR vendor has a solution for their own internal architecture, great! But it doesn’t help anyone who isn’t using their product. Come back when you actually want interoperability.

Mixed solution?:

Perhaps we can devise a solution that mixes the best aspects of both options. One possibility is a data server that receives uploads much like an FTP server. Each client application can register itself via an HTTP(S) request and tell this server

1. what types of files it wants to be notified about

2. where it can be contacted

3. what variables to use in that communication.

One application might subscribe to CBCs, radiology images, and vitals from the data server with something like:


{'resp': 'app1.ip.address',
 'data_source': ['cbc'],
 'param_name': 'incoming_action',
 'value': 'cbc_incoming'}


{'resp': 'app1.ip.address',
 'data_source': ['rad_img'],
 'param_name': 'incoming_action',
 'value': 'rad_img_incoming'}


{'resp': 'app1.ip.address',
 'data_source': ['vitals'],
 'param_name': 'incoming_action',
 'value': 'vitals_incoming'}

and a second app might only care about CBCs and radiology images (notice that the developers of app2 use different naming conventions, but the data server would not care). Its radiology subscription might look like:

{'response_addr': 'app2.ip.address',
 'data_source': ['rad_img'],
 'param_name': 'inc_act',
 'value': 'rad_img_inc'}

These registration fields are all the data server needs to send a message to a listening app and notify it that something it has subscribed to has changed.
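To sketch how the data server’s side of this could work (an assumption about a possible design, not how any existing product behaves), it only needs a registration table and a callback that speaks each subscriber’s chosen vocabulary:

# Hypothetical data-server sketch: keep a registration table and, when a new
# file arrives, call back every subscriber using the parameter name and value
# that subscriber registered. All endpoints and field names are assumptions.
import json
import urllib.request
from collections import defaultdict

subscriptions = defaultdict(list)    # data_source -> list of registrations

def register(reg):
    # reg is one of the dicts shown above
    for source in reg['data_source']:
        subscriptions[source].append(reg)

def notify(data_source, file_ref):
    # Called internally when new data of this type lands on the server.
    for reg in subscriptions[data_source]:
        addr = reg.get('response_addr') or reg.get('resp')    # tolerate either key
        payload = {reg['param_name']: reg['value'],           # the subscriber's own naming
                   'file': file_ref}
        req = urllib.request.Request(
            'https://' + addr + '/notify',                    # hypothetical callback path
            data=json.dumps(payload).encode('utf-8'),
            headers={'Content-Type': 'application/json'},
        )
        urllib.request.urlopen(req, timeout=10)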

This would:

1. Move the constant polling off the network and into the data server.

2. Eliminate the need for the data server to implement every possible interface to different EMR applications.

3. Factor out or unify any file naming conventions a data server might use internally.

4. Eliminate cycling woes, because there would be only one data source.

Is this solution obvious? Has it been implemented in the EMR product community? What are some other ideas for ‘pushing’ data in a timely and efficient manner? What scheme of transfer to external applications does your EMR use?
