geom gate network

Chad J. Milios chad at ccsys.com
Sat Oct 18 13:30:24 UTC 2014


> On Oct 17, 2014, at 8:04 AM, Sourish Mazumder <sourish at cloudbyte.com> wrote:
> 
> Hi,
> 
> I am planning to use geom gate network for accessing remote disks. I set up
> geom gate as per the FreeBSD Handbook. I am using FreeBSD 9.2.
> I am noticing heavy performance impact for disk IO when using geom gate. I
> am using the dd command to directly write to the SSD for testing
> performance. The IOPS gets cut down to 1/3 when accessing the SSD remotely
> over a geom gate network, compared to the IOPS achieved when writing to the
> SSD directly on the system where the SSD is attached.
> I thought that there might be some problems with the network, so decided to
> create a geom gate disk on the same system where the SSD is attached. This
> way the IO is not going over the network. However, in this use case I
> noticed the IOPS get cut down to 2/3 compared to IOPS achieved when writing
> to the SSD directly.
> 
> So, I have an SSD and its geom gate network disk created on the same node,
> and the same IOPS test using the dd command gives 2/3 of the IOPS for the
> geom gate disk compared to running the test directly on the SSD.
> 
> This points to some performance issues with the geom gate itself.
> 
> 
> Is anyone aware of any such performance issues when using geom gate network
> disks? If so, what is the reason for such IO performance drop and are there
> any solutions or tuning parameters to rectify the performance drop?
> 
> Any information regarding the same will be highly appreciated.
> 
> -- 
> Sourish Mazumder
> Software Architect
> CloudByte Inc.

What hardware are we talking about, specifically? Systems, NICs, SSDs. To me, the ratios you are describing don't seem that unreasonable. You surely realize you're asking a lot of a software solution when you compare it to directly attached hardware: SSDs generally handle a LOT of IOPS, and SANs in general are not going to get you anywhere close to direct-attached performance without everything in the chain being REALLY expensive.

We need real numbers and real hardware makes/models to get an idea, so a few questions:

- I see IOPS are your main concern, but could you also post throughput numbers, to compare and contrast?
- What block sizes have you tried with dd, and what is your baseline direct-attached performance?
- Have you tried iSCSI, either the new in-kernel stack or the old userland tools?
- Have you compared this to any Linux setup on the same hardware?
- When you said "create a geom gate disk on the same system", did you mean using ggatel, or still ggated/ggatec over the loopback? It'd be useful to have both of those situations benchmarked for more insight into the factors at play; a rough sketch of how I'd run that comparison follows.
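Something along these lines is what I'd try. This is only a sketch: the device name (/dev/ada1), block size and count are placeholders for whatever your setup actually uses, and the dd writes are destructive, so only point them at a disk you can scribble on:

    # 1. Baseline: write straight to the SSD
    dd if=/dev/zero of=/dev/ada1 bs=4k count=262144

    # 2. Local ggate device via ggatel -- no daemon, no TCP involved
    ggatel create -u 0 /dev/ada1
    dd if=/dev/zero of=/dev/ggate0 bs=4k count=262144
    ggatel destroy -u 0

    # 3. ggated/ggatec over the loopback -- isolates the daemon/TCP overhead
    echo "127.0.0.0/8 RW /dev/ada1" > /etc/gg.exports
    ggated
    ggatec create -o rw -u 1 127.0.0.1 /dev/ada1
    dd if=/dev/zero of=/dev/ggate1 bs=4k count=262144
    ggatec destroy -u 1

Repeating each run with a few block sizes (512, 4k, 64k, 1m) should tell you whether you're paying a fixed per-request penalty or losing raw throughput.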

Is there room for optimization and tweaking within the system as you described it? Probably. My first instinct, though, is that the problem lies more in your expectations of ggate and TCP than in any single tunable. I think iSCSI will get you closer to what you expect; how much closer, I'm not sure without trying it out.

And 9.2? That's deprecated, man. Can you use 9.3 or 10.x? :)
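If you can get onto 10.x and try the new in-kernel stack, the target side is just a small /etc/ctl.conf plus ctld. A minimal sketch, with the target name, addresses and device path as placeholders for your environment (and no authentication, so only for a test network):

    # /etc/ctl.conf on the box with the SSD
    portal-group pg0 {
        discovery-auth-group no-authentication
        listen 0.0.0.0
    }

    target iqn.2014-10.com.example:ssd0 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
            path /dev/ada1
        }
    }

Set ctld_enable="YES" in /etc/rc.conf and start ctld, then on the initiator side (with iscsid_enable="YES" and iscsid running):

    iscsictl -A -p 192.168.1.10 -t iqn.2014-10.com.example:ssd0

and you should get a /dev/da* device you can point the very same dd tests at.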

I realize you no doubt have real work to perform and don't have all day to benchmark umpteen variations and permutations of what at first glance seems like it should be a simple system. Sorry I couldn't be of more help. Maybe someone else's intuition will bring you a better answer with less headache. I only hope to have shed some light on the many factors at play here.

