Stability & Uptime
The CodeREADr platform maintains an uptime of more than 99.9%. In other words, our downtime averages less than 43.8 minutes per month, which is 0.1% of an average month. We understand that your business depends on the uptime of our servers, so we go to great lengths to keep them available and fast.
User Authentication & Permissions
Whether you’re on the website or the app, CodeREADr requires you to go through an authentication process. The website account holder (admin) assigns each mobile app user a unique username and password. The admin can then set specific permissions for each user, so users have access to only what they need.
Encrypted App Communication
When data travels from your mobile device to our servers, it is securely encrypted via TLS. This means that all of the data within your scans, such as the service type, user, device, and location, passes through this cryptographic protocol. This ensures that your data is safe, secure, and accessible only to an administrator with a valid username and password for the website.
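As a minimal sketch of what "encrypted via TLS" means on the client side, the snippet below builds a modern TLS context in Python, the kind of configuration an app's HTTP client would use so that certificate verification and hostname checking are enforced before any scan data is sent. The URL in the comment is purely illustrative, not a real CodeREADr endpoint.

```python
import ssl
import urllib.request

# Build a TLS client context with secure defaults: the server's
# certificate must verify against trusted CAs, and the hostname
# must match the certificate.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

# An HTTPS request made with this context negotiates TLS before any
# scan data leaves the device (URL shown is illustrative only):
# urllib.request.urlopen("https://api.example.com/scans", context=context)
```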
Encrypted Website Communication
The login to CodeREADr.com is also encrypted by TLS. This means that all of the data transferred between our servers and your admin’s web browser, whether viewing or downloading scans, is encrypted. And just in case someone forgets to type the “s”, we always redirect browsers from http:// to https://. Thus, every login, authentication, and view of your data is secure.
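The http-to-https redirect described above can be sketched as a small pure function. This is a hypothetical illustration of the logic, not CodeREADr's actual server configuration: a plain-HTTP request gets a permanent redirect to its HTTPS equivalent, while an already-secure request passes through.

```python
from typing import Optional, Tuple
from urllib.parse import urlsplit, urlunsplit

def force_https(url: str) -> Optional[Tuple[int, str]]:
    """Return a (status, location) redirect for plain-HTTP requests,
    or None when the request is already secure."""
    parts = urlsplit(url)
    if parts.scheme != "http":
        return None  # already HTTPS (or another scheme): nothing to do
    secure = urlunsplit(("https",) + tuple(parts)[1:])
    return 301, secure  # permanent redirect to the HTTPS equivalent
```

In practice this rule usually lives in the web server or load balancer rather than application code, but the behavior is the same.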
Encrypted API Communication
The APIs you can call to retrieve data or configure CodeREADr in the cloud are also encrypted by TLS. The API uses token-based authentication and IP filtering to ensure that only your server can connect to your information. You can revoke and reset your API keys at any time.
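The two gates mentioned above, a shared API token plus an IP allowlist, can be combined in a single server-side check. The network range and token below are placeholders, and this is an illustrative sketch rather than CodeREADr's actual implementation; note the constant-time token comparison, which avoids leaking information through timing.

```python
import hmac
import ipaddress

# Hypothetical allowlist and token (placeholders, not real values).
ALLOWED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]
API_TOKEN = "s3cr3t-example-token"

def request_allowed(source_ip: str, presented_token: str) -> bool:
    """Admit a request only when both the source IP and token match."""
    ip_ok = any(ipaddress.ip_address(source_ip) in net
                for net in ALLOWED_NETWORKS)
    # compare_digest runs in constant time, resisting timing attacks.
    token_ok = hmac.compare_digest(presented_token, API_TOKEN)
    return ip_ok and token_ok
```

Revoking a key then amounts to replacing `API_TOKEN` server-side, which immediately invalidates every caller still presenting the old value.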
Data at Rest Encryption
In addition to encrypting data in motion (app to servers, servers to app, browser to web services, etc.), data stored on our servers is also encrypted. This is typically referred to as ‘Data at Rest Encryption’.
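To make the idea concrete, here is a deliberately simplified toy cipher showing the shape of at-rest encryption: each record is encrypted with a fresh random nonce before it is written, and decrypted on read. This is for illustration only. It is not real cryptography and not what CodeREADr uses; production systems rely on vetted ciphers such as AES-GCM.

```python
import hashlib
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key + nonce + counter
    # (toy construction, for illustration only).
    blocks, counter = [], 0
    while sum(len(b) for b in blocks) < length:
        blocks.append(hashlib.sha256(
            key + nonce + counter.to_bytes(8, "big")).digest())
        counter += 1
    return b"".join(blocks)[:length]

def encrypt_at_rest(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)  # fresh nonce per stored record
    stream = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ s for p, s in zip(plaintext, stream))

def decrypt_at_rest(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    stream = _keystream(key, nonce, len(ciphertext))
    return bytes(c ^ s for c, s in zip(ciphertext, stream))
```

The point is the workflow, not the cipher: the database only ever sees the ciphertext, so a stolen disk image reveals nothing without the key.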
Replicated & Redundant Databases
With CodeREADr, you should never worry about losing data. First, our database is highly scalable. It also synchronously replicates your data across multiple data centers, so if the main database server goes down, others are waiting in standby mode to immediately take its place. These standby databases support automatic failover: the system switches from the failed database to a duplicate without human intervention. Ultimately, there is no waiting for an IT team to repair the connections.
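From a client's point of view, automatic failover looks like the sketch below: try the primary, and if the connection fails, fall through to a standby. The endpoint names are hypothetical, and real failover happens inside the database layer rather than in application code, but the effect is the same.

```python
# Hypothetical endpoint list: the primary first, then standbys.
ENDPOINTS = ["db-primary.example.com", "db-standby-1.example.com"]

def query_with_failover(endpoints, run_query):
    """Try each database endpoint in order; fall through to the next
    one on a connection failure, mimicking automatic failover."""
    last_error = None
    for endpoint in endpoints:
        try:
            return run_query(endpoint)
        except ConnectionError as error:
            last_error = error  # this node is down; try the standby
    raise last_error            # every node failed
```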
Continuous Data Backup
First, we back up snapshots of your data so that, should a disaster occur, we can restore information from any point in time, down to the second, over the past three days. Second, we store weekly and monthly backups of our databases. We back up everything except barcode images, which can simply be regenerated.
Database Technical Details:
We replicate our database across multiple data centers and support automatic failover. If there are any problems with our primary database server, we switch over to a replicated database without human intervention. We are also able to apply patches and updates to the database software without any downtime.
We conduct maintenance via the following steps:
- Perform maintenance on standby
- Promote standby to primary
- Perform maintenance on old primary, which becomes the new standby.
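The three maintenance steps above can be sketched as a simple role rotation. The function and cluster shape below are hypothetical illustrations, not CodeREADr's tooling: the standby is patched first, promoted to primary, and then the old primary is patched as the new standby, so a live node serves traffic at every moment.

```python
def rolling_maintenance(cluster: dict, apply_patch) -> dict:
    """Patch both nodes without downtime by rotating roles,
    following the three maintenance steps listed above."""
    apply_patch(cluster["standby"])             # 1. maintain the standby
    cluster["primary"], cluster["standby"] = (  # 2. promote standby to primary
        cluster["standby"], cluster["primary"])
    apply_patch(cluster["standby"])             # 3. maintain the old primary,
    return cluster                              #    now the new standby
```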
We continually back up our database, storing the backups for a defined retention period of 30 days. We support both Point-In-Time Restore and Snapshot Restore. Point-In-Time Restore allows us to pick any minute within the past 30 days (excluding the most recent 5 minutes) and restore the data from that moment. We also perform automatic full daily snapshots of our database and retain these copies for a month.
Moreover, we utilize an automated replication service for our file system. The service stores data in multiple facilities and on multiple devices within each facility. To increase durability, this system synchronously stores data across multiple facilities at the time of file creation. Also, the service calculates a checksum on all network traffic to detect corruption of data packets when storing or retrieving data.
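The checksum idea above can be shown in a few lines. This sketch uses SHA-256 as the digest, which is an assumption for illustration; the replication service may use a different checksum internally. The sender fingerprints the payload, and the receiver recomputes the fingerprint, so any corrupted packet is detected rather than silently stored.

```python
import hashlib

def checksum(payload: bytes) -> str:
    # SHA-256 digest used as an integrity fingerprint for the payload.
    return hashlib.sha256(payload).hexdigest()

def verify_transfer(payload: bytes, expected_digest: str) -> bool:
    # Recompute the checksum on receipt; a mismatch means the data
    # was corrupted somewhere in transit or storage.
    return checksum(payload) == expected_digest
```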