CDR Tickets

Issue Number 3748
Summary CDR Service: SSL-based Web Service Spike Solution
Created 2014-03-25 13:34:59
Issue Type Task
Submitted By chengep
Assigned To Kline, Bob (NIH/NCI) [C]
Status Closed
Resolved 2014-04-16 12:23:23
Resolution Fixed
Path /home/bkline/backups/jira/ocecdr/issue.121454
Description

Background/Issue Description:
The Information System Security Office (ISSO) indicated that the CDR Service currently does not encrypt traffic from the XMetaL client and CDR Loader to the CDR Service. One possible remediation is to re-implement the CDR Service as an SSL-based web service. This requires a high level of effort (LOE) because of the complexity of the CDR Service and the compatibility constraints of the underlying technology. Once the SSL-based web service is implemented, its endpoints can undergo security scanning to show NIH/IRT that the system is secure.

Task Description:
We will prototype a solution for re-implementing the CDR Service as an SSL-based web service. This work is related to the NIH AD integration task, in that we need to make sure NIH credentials will not be sent over the Internet in clear text. The lessons learned from the prototype will further inform the feasibility of re-implementing the CDR Service as an SSL-based web service.

Comment entered 2014-04-16 12:23:23 by Kline, Bob (NIH/NCI) [C]

Instead of re-implementing the CDR Server to handle traffic on port 443 directly (which would have precluded having IIS handle the CDR Admin requests on that port, in addition to requiring significantly more effort than the approach I chose), I have prototyped a wrapper which tunnels CDR client-server requests coming in on port 443. The wrapper passes each request to the CDR Server locally over port 2019 (as we have always done) and returns the CDR Server's response back over the 443 connection. Under this model, localhost CDR client-server communications would continue to use the custom port 2019, without encryption. All CDR client-server requests from other machines would come in as HTTPS requests (and thus be encrypted) over port 443. This would allow CBIIT to block port 2019 requests from all hosts except localhost.

Two server scripts were used for the prototype: one implemented in Python, the other in ASP.NET.
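A minimal sketch of what the Python tunnel wrapper might look like is shown below. This is not the actual https-tunnel.py; the CGI structure, the helper names, and especially the 4-byte length-prefixed framing assumed for the port 2019 protocol are illustrative assumptions.

    # Hypothetical sketch of an HTTPS tunnel for CDR client-server traffic.
    # Not the actual https-tunnel.py. Assumes the CDR Server on port 2019
    # expects the raw XML command set prefixed by a 4-byte big-endian
    # length, and frames its response the same way (an assumption).

    import os
    import socket
    import struct
    import sys

    CDR_HOST = "localhost"
    CDR_PORT = 2019

    def read_request():
        """Read the tunneled CDR command set from the HTTPS POST body."""
        length = int(os.environ.get("CONTENT_LENGTH") or 0)
        return sys.stdin.buffer.read(length)

    def recv_exactly(sock, count):
        """Read exactly count bytes from the socket."""
        chunks = []
        while count:
            chunk = sock.recv(count)
            if not chunk:
                raise IOError("connection closed by CDR Server")
            chunks.append(chunk)
            count -= len(chunk)
        return b"".join(chunks)

    def forward(command_xml):
        """Hand the command set to the local CDR Server; return its reply."""
        with socket.create_connection((CDR_HOST, CDR_PORT)) as sock:
            sock.sendall(struct.pack(">I", len(command_xml)) + command_xml)
            (size,) = struct.unpack(">I", recv_exactly(sock, 4))
            return recv_exactly(sock, size)

    def main():
        response = forward(read_request())
        sys.stdout.write("Content-Type: text/xml\r\n")
        sys.stdout.write("Content-Length: %d\r\n\r\n" % len(response))
        sys.stdout.flush()
        sys.stdout.buffer.write(response)

    if __name__ == "__main__":
        main()

IIS (or whatever terminates TLS on port 443) runs the wrapper and handles all of the encryption, so the script itself never touches certificates; that division of labor is what keeps the wrapper small.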

This change will introduce a performance penalty, most noticeable for very large requests. When a CDR document with a large blob is stored from XMetaL, the elapsed time before the request completes is currently around six seconds for the largest files (approximately 70-80 MB). This time increases to around 40 seconds using the HTTPS tunneling technique, with little difference in performance between the Python tunneling script and the ASP.NET script. When retrieving the same blob using XMetaL, the delay is currently so short as to be almost unnoticeable; with HTTPS tunneling, retrieval takes up to 40 seconds as well (again, with no significant difference between the two tunneling scripts). For the more common shorter requests, the Python tunneling script takes a little under twice as long as the ASP.NET script (0.496 seconds/request versus 0.275 seconds/request for my small test request listing all of the CDR document types). The ASP.NET script doesn't take much longer than submitting the requests directly over port 2019 on the same host.
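For context, per-request timings like those above can be gathered with a harness of roughly the following shape. This is a sketch, not the script actually used for the measurements; the endpoint URL and request body are placeholders.

    # Hypothetical timing harness; URL and BODY are placeholders.
    import time
    import urllib.request

    URL = "https://cdr.example.gov/cgi-bin/cdr/https-tunnel.py"  # placeholder
    BODY = b"<CdrCommandSet>...</CdrCommandSet>"  # e.g., list document types

    def seconds_per_request(url, body, runs=100):
        """Average wall-clock time for `runs` identical POSTs."""
        start = time.perf_counter()
        for _ in range(runs):
            request = urllib.request.Request(
                url, data=body, headers={"Content-Type": "text/xml"})
            with urllib.request.urlopen(request) as response:
                response.read()
        return (time.perf_counter() - start) / runs

    print("mean seconds/request: %.3f" % seconds_per_request(URL, BODY))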

While the increased delay for posting and retrieving documents with large attachments would be unfortunate, this drawback is somewhat mitigated by the fact that such requests are much less frequent than smaller ones. Tests with the most common requests (saving/retrieving documents without large blob attachments) confirm that the tunneling approach would not introduce significant performance penalties, particularly if we use the ASP.NET tunneling wrapper (though the Python wrapper also performs well).

If it is decided that the performance penalty is acceptable, it would be necessary to subject this approach to intensive testing to ensure that it performs correctly under normal working conditions and load. This would involve testing by users, as the publishing system (which is normally used for heavy-duty testing) would not be affected by this change, since all of its communications with the CDR Server are on the local host.

  • R12607 /branches/Ampere/Inetpub/wwwroot/cgi-bin/cdr/https-tunnel.ashx

  • R12600 /trunk/Inetpub/wwwroot/cgi-bin/cdr/https-tunnel.py

  • R12542 /trunk/Inetpub/wwwroot/web.config

I'm marking this ticket as 'resolved' since all it requires is a proof-of-concept prototype, not a full-blown test and deployment.