Blog

An introduction to data gravity

A couple of months ago, our President and Co-Founder John Tkaczewski had the honour of presenting an introduction to data gravity at the NAB virtual conference. While the concept of data gravity has been around for a few years, it isn’t talked about very often, especially in relation to our business – file transfer. Coined in 2010 by Dave McCrory, the basic concept of data gravity is that...

FileCatalyst: What does it do (part 3)

Think of the internet as a garden hose. Now let’s say that this hose has a kink, so the amount of water flowing through it is constricted. Bringing in acceleration technology removes the kink, and the flow can now be as much as the hose can take. The diameter of the hose is your bandwidth: the larger the diameter, the more data we can push, but we can’t push any more data than the...
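To put rough numbers on the hose analogy, here is a minimal back-of-the-envelope sketch; the file size and link capacity are hypothetical, and the point is simply that acceleration removes overhead but can never beat the bandwidth ceiling itself.

```java
// Back-of-the-envelope illustration: bandwidth is the ceiling on throughput.
// The file size and link speed below are hypothetical examples.
public class BandwidthCeiling {
    public static void main(String[] args) {
        double fileSizeGB = 10.0;          // hypothetical file size
        double linkMbps = 100.0;           // hypothetical link capacity ("hose diameter")

        double fileSizeMegabits = fileSizeGB * 1024 * 8;
        double bestCaseSeconds = fileSizeMegabits / linkMbps;

        // Acceleration can remove the "kink" (protocol overhead, latency, loss),
        // but it can never push the transfer below this best-case time.
        System.out.printf("Best case at %.0f Mbps: %.0f seconds%n", linkMbps, bestCaseSeconds);
    }
}
```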

The effect of digital technology adoption in the oil and gas industry

A recent article by CIO UK points out how quickly digital technology adoption is growing in the oil and gas sector. From seismic imaging to e-learning, 80% of senior executives in the industry believe digital advancements have revolutionized how they work. The remaining 20% believe digital technology has had a tangible impact. No matter who you ask, it’s evident the adoption is positively affecting...

A Corporate Cloud-Based Large File Attachment Solution

Looking for a corporate cloud-based large file attachment solution? With file sizes growing and email servers still imposing size restrictions, companies are in desperate need of an answer. The options most companies have are: a free or paid online service hosted by a third party, an in-house file transfer mechanism, or a self-hosted solution in the cloud. Going with the free or paid solution...

The Do’s and Don’ts of Clean Coding, Clean Code Series (Part 2)

Sometimes the hardest part of clean code is getting everyone else on board. In my last post on clean coding, I talked about why clean code is important and how it saves money in the long run and increases productivity. Now I want to talk a little bit about some of the do’s and don’ts of clean coding. Proper clean code without side effects takes...
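The post itself covers the full list, but as a small hypothetical Java sketch of what “code without side effects” looks like in practice (the class, method, and field names are made up for the example):

```java
import java.util.ArrayList;
import java.util.List;

public class SideEffects {
    // Don't: the name promises a check, but the method also mutates state.
    private List<String> failedUsers = new ArrayList<>();

    boolean isValidUser(String user) {
        boolean valid = user != null && !user.isEmpty();
        if (!valid) {
            failedUsers.add(user);   // hidden side effect
        }
        return valid;
    }

    // Do: the method does exactly what its name says and nothing more.
    static boolean isValid(String user) {
        return user != null && !user.isEmpty();
    }
}
```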

Archives – A Perfect Fit for File Transfer Acceleration

A recent article at tvtechnology.com discusses the ambiguity of the term “archive,” and uses the Library of Congress archives as an example of how digital media is changing the way films are preserved for future access and use. It was not within the scope of the article to discuss how files are shuttled between locations, but it seems clear that file transfer acceleration would be extremely useful...

Driving Efficient Video Game Production: 5 Invaluable File Transfer Features

Video game development requires vast amounts of communication and collaboration at every step of the process. As development occurs, teams must share and exchange files within the office, across the country or even around the world. During development, Electronic Arts’ “Battlefield 4” game files were as large as 50GB, making file transfer a...

NASA’s 622 Mbps Link to the Moon: How to Increase Transfer Speeds for Large Files

In a recent press release, NASA announced that they now have a 622 Mbps laser link to the moon. The press release also mentions that NASA was able to transfer data at 20 Mbps from Earth to the spacecraft via this link. I’m wondering what effective rate NASA was getting when transferring data on this link. In theory, the data should be transferring at the full 622 Mbps. Data Throughput...
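A common reason an effective rate falls short of a link’s rated speed is TCP’s window-over-round-trip-time ceiling. As a hedged illustration only, assuming a 64 KB window and a speed-of-light Earth-to-Moon round trip (neither figure comes from the press release):

```java
// Illustration of the classic TCP ceiling: throughput <= window size / round-trip time.
// The 64 KB window and ~2.56 s Earth-Moon RTT are assumptions for the example.
public class MoonLinkThroughput {
    public static void main(String[] args) {
        double windowBytes = 64 * 1024;                            // assumed TCP window
        double rttSeconds = 2 * 384_400_000 / 299_792_458.0;      // ~2.56 s round trip at light speed

        double maxBitsPerSecond = (windowBytes * 8) / rttSeconds;

        // Even on a 622 Mbps link, a single un-tuned TCP stream would crawl.
        System.out.printf("RTT: %.2f s, TCP ceiling: %.2f Mbps%n",
                rttSeconds, maxBitsPerSecond / 1_000_000);
    }
}
```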

OpenPGP, PGP, and GPG: What is the Difference?

The privacy capabilities of encryption methods such as Pretty Good Privacy (PGP) allow organizations to achieve a heightened level of data security and protection. There are various approaches and elements of comparison for these encryption methods, however, and each one comes with its own history, features, and capabilities. These are: PGP,...

Prioritizing Transfers in FileCatalyst Direct

FileCatalyst routinely helps users make full use of their bandwidth for file transfer. Scenarios in which bandwidth is optimized include cases where certain file transfers have higher priority than others, or where files being exchanged with a particular customer are more important than files exchanged with others. FileCatalyst Direct offers a number of controls in its FileCatalyst HotFolder and...

REST Development: Why HTTP Status Codes are Important

There are many ways in which REST developers can tackle error handling. Most REST services will send some kind of error structure that embeds a message describing the error along with an error code. This is a good start. However, for some REST services the HTTP status code is not well defined. In some cases, the REST services send an “OK” status code of 200 regardless of whether an...
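As one hedged sketch of the alternative, here is a hypothetical JAX-RS resource that puts the error condition in the HTTP status code instead of always answering 200; the resource, lookup, and error body are made up for the example.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Hypothetical JAX-RS resource: the HTTP status code carries the error condition,
// rather than always returning 200 with an error message buried in the body.
@Path("/files")
public class FileResource {

    @GET
    @Path("/{id}")
    public Response getFile(@PathParam("id") String id) {
        FileRecord record = lookup(id);      // hypothetical lookup
        if (record == null) {
            return Response.status(Response.Status.NOT_FOUND)
                    .entity("{\"error\":\"No file with id " + id + "\"}")
                    .type(MediaType.APPLICATION_JSON)
                    .build();
        }
        return Response.ok(record, MediaType.APPLICATION_JSON).build();
    }

    private FileRecord lookup(String id) { return null; /* stub for the example */ }

    static class FileRecord { }
}
```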

Sending large media files: How free services stack up

Ever run into issues sending large media files via email? The answer is probably yes, especially if you work in a field which requires the transfer of large format video or audio files on a daily basis. While email is great for many types of workplace communications, file size limitations (usually around 10 to 25MB) make it next to impossible to...

Java and Windows 7 Services

I recently had an interesting support case that I thought deserved to be shared with everyone. Essentially, the customer was unable to run FileCatalyst Server as a service on Windows 7 Premium with the Windows firewall up. After attempting several things, we discovered that it was a permissions issue with Java and Windows 7 services. To fix this, I edited the fcconf.conf file as an administrator and...

Comparison of web-based file transfer methodologies

Transferring large files over the internet has never been a simple task. Anyone who has ever tried to transfer a file larger than 100MB can vouch for the slow transfer speeds, multiple disconnects, data corruption, complexity, and security issues surrounding FTP. The same problem exists for web developers trying to implement web-based file transfer functionality. Ideally, the end user...
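One baseline most comparisons start from is a plain HTTP upload; as a rough sketch (not taken from the article), a minimal Java client that streams the file in chunks so it never has to fit in memory might look like this, with the URL and file path as placeholders:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Minimal HTTP upload client: streams the file in chunks so a large file
// never has to fit in memory. URL and file path are placeholders.
public class HttpUpload {
    public static void main(String[] args) throws Exception {
        Path file = Paths.get("bigfile.mov");
        URL url = new URL("https://example.com/upload");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setChunkedStreamingMode(64 * 1024);   // 64 KB chunks
        conn.setRequestProperty("Content-Type", "application/octet-stream");

        try (InputStream in = Files.newInputStream(file);
             OutputStream out = conn.getOutputStream()) {
            in.transferTo(out);                    // Java 9+
        }
        System.out.println("Server responded: " + conn.getResponseCode());
    }
}
```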

REST – HTTP POST vs. HTTP PUT

When to Use HTTP POST vs. HTTP PUT vs. HTTP PATCH

There seems to often be some confusion as to when to use the HTTP POST versus the HTTP PUT method for REST services. Most developers will try to associate CRUD operations directly to HTTP methods. I will argue that this is not correct, and one cannot simply associate the CRUD concepts to the HTTP methods. That is:

Create => HTTP PUT
Retrieve =>...
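To make the distinction concrete, here is a hypothetical pair of client calls (the endpoints are made up, and this reflects the common reading of the methods rather than the full argument in the post): PUT targets a URI the client already knows and is idempotent, while POST asks the server to create something under a collection.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical client calls illustrating the usual semantic split:
// PUT is idempotent and targets a URI the client already knows;
// POST asks the server to create a resource under a collection.
public class PostVsPut {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String json = "{\"name\":\"report.pdf\"}";

        // PUT: "store this representation at exactly this URI" - repeating it is harmless.
        HttpRequest put = HttpRequest.newBuilder(URI.create("https://example.com/files/42"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(json))
                .build();

        // POST: "create something under this collection" - the server picks the new URI,
        // and repeating the request may create a second resource.
        HttpRequest post = HttpRequest.newBuilder(URI.create("https://example.com/files"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        System.out.println(client.send(put, HttpResponse.BodyHandlers.ofString()).statusCode());
        System.out.println(client.send(post, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}
```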

FileCatalyst IBC 2012 Wrap-up

FileCatalyst exhibited at the IBC (International Broadcasting Convention) show in Amsterdam, the expo portion having run from September 7–11. It is one of the biggest shows of its type in the world, and I always find it interesting to be a part of the team in attendance. With IBC 2012 behind us, here are some parting thoughts: It was hectic, and I mean that in the best way. This year’s attendance...

Why hardware solutions can never truly replace software for file transfer acceleration

In the world of file transfer acceleration, there are multiple approaches, ranging from WAN optimization appliances to pure software solutions like FileCatalyst. Hardware sometimes gives an impression of added value; there is a tangibility that you just don’t get with software. But can a hardware solution on its own really replace software for file transfer acceleration? The answer is no. ...

Open Source Fast File Transfers

A number of open source projects are trying to tackle accelerated file transfer via UDP. Some solutions are more mature than others, and they use different technologies to solve the same problem of moving large amounts of data over a WAN. This article should provide the reader with enough information to compare the different solutions and gauge whether an open source project could be used instead of purchasing...
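For readers new to the idea, the reason these projects exist is that raw UDP by itself guarantees nothing; a bare-bones Java sketch (host, port, and payload are placeholders) shows what the open source protocols have to build on top of:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Bare UDP send: datagrams go out at whatever rate we like, regardless of latency,
// but nothing guarantees arrival or order - which is exactly what UDP-based transfer
// protocols add on top (acknowledgements, retransmission, rate control).
// Host, port, and payload are placeholders.
public class RawUdpSend {
    public static void main(String[] args) throws Exception {
        byte[] payload = new byte[1400];                 // roughly one MTU worth of data
        InetAddress host = InetAddress.getByName("198.51.100.10");

        DatagramSocket socket = new DatagramSocket();
        for (int seq = 0; seq < 1000; seq++) {
            payload[0] = (byte) seq;                     // toy sequence number
            socket.send(new DatagramPacket(payload, payload.length, host, 9000));
        }
        socket.close();
    }
}
```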

Analysis On Improving Throughput Part 2: Memory

The life cycle of a file transfer follows a basic pattern. The first and last steps in the diagram, Disk IO, were covered in Part 1 of the series: Improving Throughput Part 1: Disk IO. Disk IO is always a good place to start when analysing a system to see why files are not transferring fast enough. In the second article in...
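As a generic illustration of the kind of measurement involved (a sketch, not FileCatalyst internals), a quick probe of in-memory copy speed gives a number to compare against disk and network rates:

```java
// Generic illustration (not FileCatalyst internals): a rough probe of how fast
// this machine can shuffle bytes around in RAM, to compare against disk and
// network rates. Buffer size and iteration count are arbitrary.
public class MemoryCopyProbe {
    public static void main(String[] args) {
        byte[] src = new byte[64 * 1024 * 1024];   // 64 MB buffers
        byte[] dst = new byte[src.length];
        int iterations = 50;

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            System.arraycopy(src, 0, dst, 0, src.length);
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        double mbCopied = (double) src.length * iterations / (1024 * 1024);

        System.out.printf("In-memory copy rate: %.0f MB/s%n", mbCopied / seconds);
    }
}
```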

Analysis On Improving Throughput Part 1: Disk IO

This post is the first of a series and continues in Part 2: Memory. In light of the release of FileCatalyst Direct v3.0, I thought I’d write a few articles about the road to achieving 10Gbps speeds. It seems to me the best place to start is with the endpoint of the transfer: the storage media. Why? Before we even think about the file transfer protocol, we have to be sure that our disks can keep up...
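In that spirit, a rough sequential-read probe (a sketch under an assumed file name, not the methodology from the post) shows whether the storage can source data anywhere near a 10 Gbps target:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

// Rough sequential-read probe (a sketch, not the methodology from the post):
// before tuning any transfer protocol, check that the storage itself can
// source data at the rate you are hoping to hit. The file path is a placeholder.
public class DiskReadProbe {
    public static void main(String[] args) throws Exception {
        byte[] buffer = new byte[1024 * 1024];     // 1 MB reads
        long totalBytes = 0;
        long start = System.nanoTime();

        try (InputStream in = Files.newInputStream(Paths.get("large-test-file.bin"))) {
            int read;
            while ((read = in.read(buffer)) != -1) {
                totalBytes += read;
            }
        }

        double seconds = (System.nanoTime() - start) / 1e9;
        double gbps = totalBytes * 8 / seconds / 1e9;
        System.out.printf("Sustained read: %.2f Gbps (a 10G link needs ~10 Gbps sustained)%n", gbps);
    }
}
```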