
IRC log for #gluster, 2013-11-06


All times shown according to UTC.

Time Nick Message
00:02 davidbierce joined #gluster
00:03 davidbie_ joined #gluster
00:04 rwheeler joined #gluster
00:34 rwheeler joined #gluster
00:40 hagarth joined #gluster
01:25 fidevo joined #gluster
01:25 kodiakFiresmith joined #gluster
01:26 _ndevos joined #gluster
01:30 wushudoin joined #gluster
01:52 davidbierce joined #gluster
01:54 DV__ joined #gluster
01:56 harish joined #gluster
01:57 Fresleven joined #gluster
02:21 djgiggle joined #gluster
02:22 djgiggle Hi, I'm testing glusterfs on 2 virtual machines and find that the 2 servers are not syncing. Files created on one server initially got replicated onto the 2nd one, but don't anymore. Any reason why?
02:29 failshell joined #gluster
02:32 hagarth djgiggle: are you writing from a mount or writing to bricks directly?
02:33 djgiggle hagarth: um, I'm just writing to the directory using a python script as well as using "touch filename"
02:34 djgiggle and I did it on one of the servers instead of on a client machine
02:35 smellis joined #gluster
02:35 smellis anyone done back to back with cheap ebay ib cards?
02:37 hagarth djgiggle: you would need to write from a volume that is mounted for replication to happen
02:37 hagarth and the volume has to be a replicated one.
02:37 djgiggle hagarth: as far as i know, I did so. initially the files got replicated. after a day, it didn't.
02:38 Fresleven_ joined #gluster
02:39 hagarth djgiggle: this might be of help - http://www.gluster.org/community/documentation/index.php/Getting_started_rrqsg
02:39 glusterbot <http://goo.gl/uMyDV> (at www.gluster.org)
02:39 hagarth djgiggle: i will bbiab
02:39 djgiggle hagarth: ok
02:39 bharata-rao joined #gluster
02:43 zwu joined #gluster
02:47 andreask joined #gluster
02:51 sgowda joined #gluster
02:53 JoeJulian elico: interesting tidbit I found for a completely unrelated reason, but it's rather apropos: "Each directory on a normal Unix filesystem has at least 2 hard links: its name and its  ‘.’ entry. Additionally, its subdirectories (if any) each have a ‘..’  entry linked to that directory."
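A quick way to verify that link-count rule on any Unix system (paths are arbitrary examples; stat -c is GNU coreutils):

    mkdir -p /tmp/linkdemo/{a,b,c}
    stat -c %h /tmp/linkdemo
    # prints 5: the directory's own name, its '.' entry, plus one '..' entry per subdirectory (3)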
02:53 shubhendu joined #gluster
03:02 diegows_ joined #gluster
03:07 ppai joined #gluster
03:16 hagarth joined #gluster
03:18 djgiggle hagarth: got it, thanks. turns out i was writing directly to the brick, which i suppose is something i'm not supposed to do.
03:23 hagarth djgiggle: yes, bricks are sacred; nobody should touch them apart from gluster processes :)
03:23 djgiggle hagarth: i have learnt. :) thanks again.
03:23 djgiggle so even on the servers, if i want to access the files, i should always create a mount point, right?
03:23 hagarth djgiggle: yes, that is the recommended method.
03:24 djgiggle got it!
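A minimal sketch of the pattern hagarth is recommending, assuming a replicated volume named gv0 with a brick at /export/brick1 (all names and paths are placeholders):

    # on the server itself, mount the volume instead of touching the brick path
    mount -t glusterfs localhost:/gv0 /mnt/gv0
    touch /mnt/gv0/hello.txt        # goes through the client stack, so it replicates to every brick
    # writing straight into /export/brick1/ bypasses replication and confuses self-heal -- avoid it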
03:35 itisravi joined #gluster
03:36 RameshN joined #gluster
03:52 aliguori joined #gluster
04:00 plarsen joined #gluster
04:04 mohankumar joined #gluster
04:05 ababu joined #gluster
04:09 hagarth joined #gluster
04:16 DV__ joined #gluster
04:18 Skaag` joined #gluster
04:19 elyograg_ joined #gluster
04:29 wgao joined #gluster
04:30 X3NQ joined #gluster
04:32 shruti joined #gluster
04:46 psharma joined #gluster
04:59 elyograg_ I'm emailing gluster-users with info about our rebalance problems and the request for a consultant.
05:08 hagarth joined #gluster
05:09 vpshastry joined #gluster
05:10 davidbie_ joined #gluster
05:12 ababu joined #gluster
05:19 hagarth joined #gluster
05:20 psharma joined #gluster
05:31 raghu joined #gluster
05:41 rjoseph joined #gluster
05:44 kshlm joined #gluster
05:44 bala joined #gluster
05:53 CheRi joined #gluster
06:01 ninkotech__ joined #gluster
06:02 bstr joined #gluster
06:02 kanagaraj joined #gluster
06:16 nshaikh joined #gluster
06:16 DV__ joined #gluster
06:23 purpleidea elyograg_: saw that. redhat is a good company to get support from if you can't find the solution in gluster-users.
06:24 ndarshan joined #gluster
06:24 psharma joined #gluster
06:27 rastar joined #gluster
06:29 mjrosenb ok, I think I've asked this before, but what does .glusterfs do on a dht setup?
06:30 bharata-rao joined #gluster
06:43 raar joined #gluster
06:45 vshankar joined #gluster
06:45 shri_ joined #gluster
06:49 shri_ shruti: ping -- need help
06:51 Shri joined #gluster
06:51 shri_ shruti: Need Help for glusterfs + openstack + libgfapi ..
06:57 twx joined #gluster
07:00 ricky-ticky joined #gluster
07:00 ninkotech__ joined #gluster
07:00 ninkotech joined #gluster
07:03 vimal joined #gluster
07:04 aravindavk joined #gluster
07:04 shruti shri_, hi, I am not really sure about that.
07:06 lalatenduM joined #gluster
07:06 shri_ shruti: I want to use libgfapi in openstack..
07:06 saurabh joined #gluster
07:07 shri_ shruti: openstack will mount the glusterfs, and I don't want that FUSE overhead, so is there any way to use libgfapi?
07:07 shri_ shruti: Do you know if there is any way to use libgfapi?
07:08 satheesh joined #gluster
07:08 JoeJulian shri_: afaik, you'd have to modify the libvirt template - at least that's how I guess it would have to be done. I haven't actually tried yet.
07:08 ngoswami joined #gluster
07:12 samppah shri_: iirc there is patch for openstack to use libgfapi and it should be included in havana
07:13 samppah just notice that if you are using rhel, or some other distro based on it, it doesn't have support for libgfapi in qemu / libvirt yet
07:14 shri_ shruti: thanks I will try check that...
07:15 shri_ samppah: Havana.. which released on 17-oct ..right ?
07:15 samppah shri_: yes
07:15 samppah https://www.mirantis.com/blog/openstack-havana-glusterfs-and-what-improved-support-really-means/
07:15 glusterbot <http://goo.gl/dXAYqz> (at www.mirantis.com)
07:15 JoeJulian I wonder if that's true for RDO
07:16 samppah that's a good question.. i have been following libgfapi support for ovirt and i'm kind of confused about what's going on
07:16 shri_ samppah: because what I have seen by default openstack will mount the gluster FS..
07:17 samppah shri_: does it also use vm images through it?
07:17 shri_ samppah: JoeJulian: I did change the related cinder settings in cinder.conf, glusterfs_shares_config..
07:18 shri_ samppah: JoeJulian: thnx for help .. I will try above mentioned things
07:18 bulde joined #gluster
07:19 ninkotech joined #gluster
07:19 shri_ samppah: my goal is to create cinder volume on glusterfs using libgfapi (don't want fuse overhead)
07:19 ninkotech__ joined #gluster
07:19 samppah shri_: what distribution you are using?
07:19 shri_ samppah: Yes I found cinder volume get created on glusterfs mnt pt
07:20 shri_ samppah: I'm trying with Fedora19
07:20 samppah okay
07:20 shri_ samppah: Is there any known issue with Fedora19 + libgfapi for openstack ?
07:21 samppah shri_: not that i have heard of.. JoeJulian do you have any idea?
07:21 samppah i'm not currently running openstack though
07:21 psharma joined #gluster
07:22 shri_ samppah: JoeJulian - I did change cinder.conf so cinder/openstack would use libgfapi .. but found that openstack still mounts the gluster FS
07:22 shri_ and not use libgfapi
07:24 samppah afaik it's fine that openstack mounts the volume but it should start VMs with something like qemu-kvm -drive gluster://glusterServer/volName/vm.img
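A hedged sketch of the Havana-era configuration being discussed; the option names and file paths below are best-effort recollections (check them against the release notes), and the UUID is a made-up placeholder:

    # /etc/cinder/cinder.conf
    volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config = /etc/cinder/glusterfs_shares    # one "server:/volname" entry per line

    # /etc/nova/nova.conf -- lets nova hand qemu a gluster:// URL instead of a FUSE path
    qemu_allowed_storage_drivers = gluster

    # roughly what qemu should then be started with:
    qemu-kvm -drive file=gluster://glusterServer/volName/volume-<uuid>,if=virtio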
07:25 shri_ samppah: when I try to launch Instance in openstack.... it throws some error and goes into Error state
07:26 samppah can you send that error to pastie.org ?
07:26 jtux joined #gluster
07:26 shri_ samppah: It threw the below error when I launch an Instance --
07:26 shri_ ----
07:26 shri_ Error: Failed to launch instance "vm1": Please try again later [Error: Remote error: ProcessExecutionError Unexpected error while running command. Command: sudo nova-rootwrap /etc/nova/rootwrap.conf env CONFIG_FILE=["/etc/nova/nova.conf"] NETWORK_ID=1 dnsmasq --strict-order --bind-interfaces --conf-file= --pid-file=/opt/stac].
07:26 shri_ -------
07:26 shri_ samppah: something related to nova.confg and network
07:28 5EXAAQCSH joined #gluster
07:28 77CAAJ725 joined #gluster
07:29 shri_ samppah: added below thing in nova.confg for network related interface
07:29 shri_ --
07:29 RameshN joined #gluster
07:29 shri_ HOST_IP_IFACE=p2p1
07:29 shri_ PUBLIC_INTERFACE=p2p1
07:29 shri_ FLAT_INTERFACEA=p2p1
07:29 shri_ FLAT_NETWORK_BRIDGE=p2p
07:30 shri_ VLAN_INTERFACE=p2p1
07:30 shri_ ----
07:30 samppah shri_: hmm, that doesn't sound it's caused by glusterfs?
07:30 shri_ samppah: yes.. it's different related to nova.config & some network related things
07:32 davidbierce joined #gluster
07:36 DV__ joined #gluster
07:39 _ndevos joined #gluster
07:42 JoeJulian No idea on that Fedora + libgfapi question. I run CentOS 6.4 in production and nobody's come here with any complaints with regard to Fedora.
07:43 JoeJulian Goodnight. o/
07:43 samppah Good night Joe
07:44 ngoswami joined #gluster
07:45 shri_ JoeJulian: thanks.. good night !
07:52 ekuric joined #gluster
07:56 franc joined #gluster
07:56 franc joined #gluster
07:59 ctria joined #gluster
08:03 raar joined #gluster
08:04 eseyman joined #gluster
08:18 keytab joined #gluster
08:20 nueces joined #gluster
08:23 Shri joined #gluster
08:40 shri_ joined #gluster
08:40 Shri joined #gluster
08:46 keytab joined #gluster
08:49 rotbeard joined #gluster
09:00 P0w3r3d joined #gluster
09:02 calum_ joined #gluster
09:05 meghanam_ joined #gluster
09:05 meghanam__ joined #gluster
09:07 DV joined #gluster
09:07 mgebbe_ joined #gluster
09:13 ndarshan joined #gluster
09:29 ninkotech joined #gluster
09:29 ninkotech__ joined #gluster
09:36 schrodinger_ joined #gluster
09:37 DV joined #gluster
09:43 ababu joined #gluster
09:51 meghanam_ joined #gluster
09:52 meghanam__ joined #gluster
09:52 mbukatov joined #gluster
10:21 shri joined #gluster
10:25 mohankumar joined #gluster
10:29 ndarshan joined #gluster
10:35 edward2 joined #gluster
10:38 bala joined #gluster
10:43 hagarth joined #gluster
11:14 bala joined #gluster
11:16 psharma joined #gluster
11:18 raghu left #gluster
11:20 diegows_ joined #gluster
11:29 FooBar joined #gluster
11:30 franc Hello, it is posible to convert a volume replicated to replicated-distributed?
11:33 samppah franc: just add more bricks and it will become distributed :)
11:36 franc samppah: thx! When I add a brick i get this message: volume add-brick: failed: Incorrect number of bricks supplied 1 with count 2
11:37 franc any idea?
11:37 RobertLaptop joined #gluster
11:38 samppah franc: did you create volume with replica 2? if so you need to add two bricks
11:38 samppah data will be replicated between them
11:40 B21956 joined #gluster
11:40 franc samppah: ok! Thank you! :)
11:41 samppah no problem :)
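For reference, a sketch of the add-brick step samppah describes, assuming an existing replica 2 volume named myvol (hosts and brick paths are placeholders):

    gluster volume add-brick myvol server3:/export/brick1 server4:/export/brick1
    gluster volume rebalance myvol start     # optional: spread existing files onto the new pair
    gluster volume info myvol                # Type should now read Distributed-Replicate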
11:41 mjrosenb so, what does the .glusterfs directory do on a dht-only brick?
11:43 mjrosenb and what happens if it doesn't get populated (and can't get populated)
11:47 ndarshan joined #gluster
11:55 rwheeler joined #gluster
11:58 ndevos mjrosenb: also DHT uses some kind of replication, namely for directories - I expect to see that in the .glusterfs directory, and there are probably other usages too
12:00 ndevos mjrosenb: oh, and there are some functionalities that open a file by gfid, the .glusterfs directory makes that possible (not sure when those functions get used though)
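The layout ndevos is describing, roughly: every file on a brick gets a hard link under .glusterfs keyed by its gfid. The xattr name is real; the file name and gfid below are made-up examples:

    getfattr -n trusted.gfid -e hex /export/brick1/somefile
    # trusted.gfid=0x1a2b3c...
    ls -li /export/brick1/.glusterfs/1a/2b/1a2b3c...        # same inode number as somefile
    # hard links cannot span filesystems, so .glusterfs has to live on the brick's own filesystem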
12:01 bala joined #gluster
12:02 mjrosenb ndevos: ok, well .glusterfs is on a different file system from most of the data stored in the brick, so those hardlinks don't work.
12:08 rotbeard joined #gluster
12:08 ndevos mjrosenb: that doesnt sound really good... there should be no good reason to split the contents of a brick over more than one filesystem
12:10 rotbeard joined #gluster
12:12 rotbeard joined #gluster
12:20 rcheleguini joined #gluster
12:21 kanagaraj joined #gluster
12:24 aravindavk joined #gluster
12:24 mjrosenb ndevos: I have different setting for the different file systems.  Also, it helps out, since running df on the two bricks and adding is much faster than running du on a directory anywhere else.
12:24 ppai joined #gluster
12:24 shubhendu joined #gluster
12:25 ababu joined #gluster
12:26 vpshastry joined #gluster
12:32 gunthaa__ joined #gluster
12:34 glusterbot New news from resolvedglusterbugs: [Bug 1018178] Glusterfs ports conflict with qemu live migration <http://goo.gl/oDNTL3>
12:36 bstr joined #gluster
12:41 kkeithley mjrosenb: I'm sure it helps that way, but given how glusterfs is currently implemented, you're fighting a losing battle. There's no quick hack that will change the way glusterfs works. You are free  to download the source and change it to work the way you want. You can file an enhancement request ,,(bugzilla) asking for the change; I can't make any promises that it'll go anywhere.
12:41 glusterbot kkeithley: Error: No factoid matches that key.
12:41 kkeithley ,,(bugs)
12:41 glusterbot kkeithley: Error: No factoid matches that key.
12:41 kkeithley ,,(bug)
12:41 glusterbot kkeithley: Error: No factoid matches that key.
13:01 mbukatov joined #gluster
13:02 RameshN joined #gluster
13:05 davidbierce joined #gluster
13:06 glusterbot New news from newglusterbugs: [Bug 1024465] Dist-geo-rep: Crawling + processing for 14 million pre-existing files take very long time <http://goo.gl/BxNBkc>
13:09 CheRi joined #gluster
13:10 harish joined #gluster
13:22 davidbierce joined #gluster
13:34 glusterbot New news from resolvedglusterbugs: [Bug 834847] Fails to RPM update from 3.2.1 to 3.3.0 <http://goo.gl/uCrKwC>
13:35 ProT-0-TypE joined #gluster
13:36 glusterbot New news from newglusterbugs: [Bug 998967] gluster 3.4.0 ACL returning different results with entity-timeout=0 and without <http://goo.gl/B2gFno>
13:44 mbukatov joined #gluster
13:47 davidbierce joined #gluster
14:02 bennyturns joined #gluster
14:02 mbukatov joined #gluster
14:05 vpshastry left #gluster
14:10 stefanha_ joined #gluster
14:11 stefanha_ At what level does Gluster guarantee stability?  Command-line interface?  xlator configuration?  etc
14:13 T0aD 2 years guarantee if you bring it back sealed
14:13 stefanha_ Damn I already threw away the packaging
14:13 stefanha_ Seriously, I'm curious what the plan is since Gluster has 2 levels of interfaces: the command-line and the underlying xlator/volume config
14:14 xavih joined #gluster
14:18 ndk joined #gluster
14:23 kkeithley I'm not sure what you're asking. In general we tell people not to edit the vol files; if you know what you're doing though, you can do that.
14:24 kkeithley Community Gluster? There are no guarantees. ;-)
14:24 stefanha_ kkeithley: Two scenarios: You have a custom vol file and upgrade Gluster, does it always continue to work?
14:25 stefanha_ kkeithley: 2. You have scripts that invoke the Gluster CLI and upgrade, do the scripts still work?
14:26 kkeithley I'd say custom vol files should continue to work. YMMV. I'm not aware that we make any promises about the cli not changing from release to release.
14:26 P0w3r3d joined #gluster
14:28 Skaag joined #gluster
14:29 stefanha_ kkeithley: I'm a bit surprised.  Figured it would be the other way around.
14:29 stefanha_ kkeithley: For example, Puppet scripts or OpenStack would use the CLI so that needs to be stable.
14:31 kkeithley1 joined #gluster
14:34 shubhendu joined #gluster
14:34 kkeithley yeah. I'm just not aware that we make any promises. Maybe one of the other devs who knows something different will weigh in.
14:38 kkeithley Certain CLI commands like volume create and volume start will undoubtedly never change in an incompatible way. I'd guess some of the more esoteric CLI commands are the ones that could change
14:51 Debolaz joined #gluster
14:52 stefanha_ kkeithley: I see
14:59 dbruhn joined #gluster
15:00 neofob joined #gluster
15:02 zerick joined #gluster
15:16 wushudoin joined #gluster
15:19 bugs_ joined #gluster
15:26 lpabon joined #gluster
15:26 bnh2 joined #gluster
15:26 bnh2 Hii
15:27 bnh2 Anyone online
15:27 bnh2 I am having problems with GlusterFS speed
15:29 bnh2 Creating a file locally 40GB
15:29 bnh2 root@delivery:/# dd if=/dev/zero of=/mnt/local/bh.file bs=1M count=40000
15:29 bnh2 40000+0 records in
15:29 bnh2 40000+0 records out
15:29 bnh2 41943040000 bytes (42 GB) copied, 234.527 s, 179 MB/s
15:29 bnh2 Creating a file on glusterfs mount 40GB from client /data
15:29 bnh2 root@delivery:/data# dd if=/dev/zero of=/data/test.file bs=1M count=40000
15:29 bnh2 40000+0 records in
15:29 bnh2 40000+0 records out
15:29 bnh2 41943040000 bytes (42 GB) copied, 1230.94 s, 34.1 MB/s
15:29 bnh2 so copying files to the glusterfs is writting slow almost
15:29 bnh2 very slow
15:32 neofob bnh2: what is your server setup? distributed/duplicated...etc?
15:32 jbrooks joined #gluster
15:32 m0zes s/dupl/rep/
15:32 glusterbot m0zes: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
15:32 neofob m0zes: thanks
15:32 zaitcev joined #gluster
15:33 kPb_in_ joined #gluster
15:34 neofob having 2 servers in a replica setup can slow you down on write because the client sends data to both servers
15:41 bnh2 yes i have them replicated on three servers
15:42 neofob that will kill the bandwidth :D
15:43 neofob so 1GigE will be divided roughly into 3
15:43 neofob however, when you have many client/readers you will get the benefit of 3 servers
15:44 neofob so 34MB/s sounds about right when you write from one client
15:51 hchiramm__ joined #gluster
15:58 bnh2 neofob so is it nothing to do with the disk file format or Caches? or glusterfs??
15:58 bnh2 you think its because i have my volume setup as a replica
16:01 elyograg_ to convert from bytes per second seen on a transfer speed to bits per second, you multiply by approximately ten.  There are 8 bits per byte, plus packet and protocol overhead.  10 works out about right in most situations.  so you're seeing at least 300MB/s ... which when you multiply by three, is your gigabit link.
16:02 elyograg_ 300Mb/s that is.  not MB/s.
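Worked out for this setup (a single gigabit link and replica 3, where the client sends every byte to all three servers):

    34 MB/s x ~10 (8 bits per byte plus protocol overhead) ≈ 340 Mbit/s on the wire per copy
    340 Mbit/s x 3 replicas ≈ 1 Gbit/s, i.e. a saturated gigabit NIC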
16:04 _polto_ joined #gluster
16:05 _polto_ hi
16:05 glusterbot _polto_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:06 neofob bnh2: yes, i think so; check with system monitor or some sort to see how much network activity is when you write
16:07 neofob as you describe, it should max out the bandwidth
16:08 neofob but from client apps point of view it can only write 1/3 of that max bandwidth -> 34MB/s
16:10 _polto_ I am trying to set up a remote failover using glusterfs. The basic setup works nicely and files are synced to the slave server. But writing to the gluster is very slow since the slave server has only a small ADSL connection. Will geo-replication help here? I think it simply needs to be asynchronous..
16:12 dbruhn You should not be using replication over lines that slow and expect any sort of performance. If async is an option geo replication will work fine, as long as you have enough bandwidth to facilitate the replication transfer
16:13 hchiramm__ joined #gluster
16:19 _polto_ dblack, ok. thanks.
16:19 _polto_ I will play with geo-replication to understand how it works.
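A hedged sketch of starting geo-replication with the 3.4-era syntax; volume and host names are placeholders, and the exact command form differs between releases:

    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status
    # replication is asynchronous, so writes on the master are not held back by the slow ADSL link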
16:25 aravindavk joined #gluster
16:28 bnh2 neofob so what would you advice to get the best out of speed, so shall i mount using distributed?
16:29 bnh2 <neofob> and isn't 34MB quite low
16:30 failshell joined #gluster
16:35 kPb_in_ joined #gluster
16:36 dbruhn bnh2, why were you wanting to use replication or replication 3 in the first place?
16:37 bnh2 so files are copied across all three servers in case one dies; then we can bring the other one up and change the mount on the client
16:38 dbruhn bn2h, are you planning on using the FUSE client or NFS?
16:39 bnh2 NFS
16:39 bnh2 i am using nfs at the moment
16:39 dbruhn Are you opposed to using the gluster fuse client?
16:39 bnh2 mount -t glusterfs localhost:/gluster /mnt/bnh2
16:40 dbruhn thats the gluster fuse client
16:40 dbruhn with that your clients connect to each of the serves in the gluster cluster
16:40 bnh2 sorry that's how i mounted them
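For contrast, the two mount styles being discussed (server and volume names are placeholders; Gluster's built-in NFS server speaks NFSv3 over TCP):

    # native FUSE client: talks to every brick server and fails over on its own
    mount -t glusterfs server1:/gluster /data
    # built-in NFS server: a single endpoint, with the kernel NFS client's caching
    mount -t nfs -o vers=3,mountproto=tcp server1:/gluster /data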
16:40 dbruhn so, ways to improve your performance...
16:41 bnh2 yes please
16:41 dbruhn first off, replica 2 makes sense, I use it myself
16:41 bnh2 I have 3 clus's so clus1,clus2,clus3 and then Client
16:41 dbruhn secondly, gluster isn't going to magically fix performance issues, you need to put hardware under it that supports your needs
16:41 neofob bnh2: if don't need the uber-fault-tolerance then use 2 or 1 replica then have geo-replica
16:41 bnh2 cool sounds good what would you advice?
16:41 dbruhn If you need faster transfers, you need faster disk, and faster network to facilitate it
16:42 dbruhn replication comes as a cost on writes, but comes with a benefit on reads.
16:42 neofob that way you trade off the throughput for fault-tolerance
16:42 dbruhn you have given us one example of a 40gb file you are generating, but what do your real world storage usage characteristics look like
16:43 bnh2 about 2GB max
16:43 bnh2 video file
16:43 dbruhn ok, so you are dealing with larger files, how many clients?
16:43 bnh2 neofob - geo-replica performs better, you reckon?
16:43 bnh2 one client
16:43 bnh2 that has a web interface
16:44 bnh2 which connects to the data from the gluster
16:44 dbruhn Then I would be recommending spending some $$ on the proper network equipment, and disk counts to support your throughput needs.
16:44 dbruhn 10GB ethernet will dramatically improve your scenario
16:44 bnh2 cool I will probly do that.
16:44 bnh2 thanks
16:44 dbruhn How many hard drives are you running in each server? and are you using physical raid?
16:44 neofob bnh2: by using geo-replica you only do the sync at certain time
16:44 bnh2 for your support mate
16:45 dbruhn No problem
16:45 neofob yes, 10G ethernet or infiniband would do it :D
16:45 bnh2 neofob - that doesnt help as i will need live replication :s
16:45 dbruhn also, if 10GB e is too expensive, IB hardware can be had cheaper and is faster, just more to muck with
16:45 dbruhn IPoIB isn't a bad route right now
16:46 neofob bonding a bunch of gigE, perhaps?
16:46 bnh2 let me have a look at my options as the servers are brand new and i asked for best spec
16:46 bnh2 i paid alot for 8servers
16:46 dbruhn bonding still has latency issues, and doesn't scale as well, also adds a lot of complexity
16:47 dbruhn with big files latency isn't as big of a deal with gluster though
16:47 dbruhn a lot less access and just more throughput bound
16:49 neofob bnh2: perhaps you can export the glusterfs mount from one server to many clients; that way the clients get the benefit of maxing out the bandwidth on write and that server will redistribute the I/O to the other gluster servers
16:50 neofob dbruhn: would that work?
16:51 neofob but you don't get the benefit on read with replica/distribute
16:52 dbruhn neofob, sounds like he only has one client connection from a web server and he is using gluster as his file storage for video files.
16:52 dbruhn Putting load balancers, and additional web servers wouldn't really fix his issue
16:52 dbruhn and the way gluster works with file requests is it's first to respond to request from a replication group, so he still gets the read benefits
16:53 bnh2 okay
16:53 dbruhn My personal suggestion is more spindles, hardware RAID, and at min 10GBe, or move to infiniband with tcp/ip
16:53 dbruhn all of these things will increase his throughput capabilities
16:54 bnh2 So 34MB is what's expected out of my current setup so nothing wrong with how i setup my files
16:54 bnh2 I did check the disk system and network and bother turned out fine
16:54 bnh2 so best is to get IPoIB or 10GBe
16:54 dbruhn bnh2, It doesn't seem unreasonable
16:54 dbruhn how many disks are you running per server?
16:55 bnh2 and make sure my cisco switch allows that much bandwidth to transfer
16:55 bnh2 they are raided
16:55 dbruhn that is going to be another sticking point at some point in time
16:55 dbruhn raided, but how many disks?
16:55 bnh2 about 30drives of 300
16:55 bnh2 GB
16:55 dbruhn 15k sas?
16:56 bnh2 I will need to go server room and check mate
16:56 dbruhn kk
16:56 bnh2 and sorry its 30 of 1tb drives
16:56 dbruhn so 10 drives per server?
16:56 dbruhn or 30 drives per server?
16:56 bnh2 no 30 per server
16:57 dbruhn they are going to be 7200 rpm drives then
16:57 dbruhn at 1TB
16:57 dbruhn then yes, you are bound by your network limitations at this point
16:58 bnh2 they all connected to the same switch
16:58 bnh2 so when i do ipref
16:59 bnh2 iperf -s and listen to connections between all four servers
16:59 aliguori joined #gluster
16:59 dbruhn I am assuming 2 copies of data is enough for your data protection needs?
16:59 bnh2 [  5]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
16:59 bnh2 yep and one live
17:00 dbruhn Well replication is replication, it's not a backup
17:00 bnh2 true say
17:00 dbruhn I would suggest a replica2 system, and then use geo-replication, or a backup technology to back your data up to the third on interval
17:00 bnh2 so can't do much about that but see if i can increase the bandwidth on my NIC
17:00 bnh2 and routers/switches
17:00 bnh2 right
17:00 bnh2 I just wanted to confirm my setup is correct and nothing wrong
17:01 bnh2 with glusterfs setup
17:01 dbruhn I guess one thing no one has asked, is what is your throughput goal?
17:01 dbruhn You are running this through a website... so 34MB/s isn't a bad deal...
17:02 dewey joined #gluster
17:02 bnh2 basically what I am trying to do is " I have a content managment site running" with users across the company accessing this site to download or upload data
17:02 bnh2 now the server that host the site, its www folder is the glusterfs client
17:02 bnh2 which is mounted to clus1 .eg.
17:03 bnh2 so when a user logs in, they will delete a file from client folder and will replicate on all three but we plan to restrict access on the 3rd server so delete doesnt apply on that server
17:03 bnh2 which is something to do with user permission
17:03 bnh2 but yea
17:03 bnh2 they should be able to upload/download/edit and should get replicated
17:04 bnh2 34MB i think is slow looking at how long it will take to download or upload a 1GB file
17:04 Rav joined #gluster
17:05 neofob bnh2: also, make sure that all clients/servers have jumbo frames set (9K or so) to get the last bit of your bandwidth and reduce cpu load
17:05 kPb_in__ joined #gluster
17:05 neofob if our switch supports it
17:05 neofob s/our/your/
17:05 glusterbot What neofob meant to say was: if your switch supports it
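A hedged example of the jumbo-frame tuning neofob mentions (interface and host names are placeholders; every NIC and switch port in the path must agree on the MTU):

    ip link set dev eth0 mtu 9000
    ping -M do -s 8972 server2      # 8972 bytes of payload + 28 bytes of headers = 9000; must not fragment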
17:06 bnh2 thanks glusterb
17:07 dbruhn bnh2, sounds like you aren't really getting how gluster replication works; it gets copied and removed from all the replication pairs at the same time
17:07 dbruhn you don't want to try and tamper with the file systems underneath to break this
17:07 bnh2 sure neofob i will look into this later once all up and running
17:08 dbruhn From what you are saying here is that waiting 30 seconds for a 1GB file to download is too slow for your users?
17:08 dbruhn and how many users are uploading and downloading at the same time?
17:08 bnh2 no 30seconds for 1GB is very good
17:09 dbruhn well at 34MB/s it would take about 30 seconds to move 1GB of data
17:09 bnh2 but i am saying we have at least 100users downloading files at the same time while another 300 to 400 users viewing the files
17:09 dbruhn That's why I asked the number of users.
17:09 bnh2 34MB is 34Mb
17:09 bnh2 lol
17:09 dbruhn how many uploading at a time?
17:09 bnh2 sorry i am confused
17:10 bnh2 its 340Mb right?
17:10 dbruhn yep
17:10 bnh2 great stuff my problem is solved then
17:10 dbruhn At least I think so, I wasn't really involved in the first part of the conversation
17:10 bnh2 340Mb is what i was expecting for a file transfer
17:12 bnh2 basically when i tried at first it use to hang and freeze but now it works fine after following some instructions over the web
17:13 bnh2 basically i cleared caches
17:13 bnh2 echo 3 > /proc/sys/vm/drop_caches
17:14 pdrakeweb left #gluster
17:14 pdrakeweb joined #gluster
17:14 bnh2 and then added gluster volume set myvolume performance.cache-size 1GB
17:14 bnh2 and restarted the service
17:14 bnh2 and kept transfering at the rate of 34MB
17:14 bnh2 so i was thinking thats 34mb/s
17:14 bnh2 exit
17:15 bnh2 sorry
17:20 Technicool joined #gluster
17:30 |Rav| joined #gluster
17:38 Mo__ joined #gluster
17:42 neofob left #gluster
17:48 RedShift joined #gluster
17:58 vpshastry joined #gluster
18:05 vpshastry left #gluster
18:06 edong23 joined #gluster
18:07 bennyturns joined #gluster
18:07 edong23 joined #gluster
18:18 bulde joined #gluster
18:23 vpshastry joined #gluster
18:28 vpshastry left #gluster
18:28 DataBeaver joined #gluster
18:38 rotbeard joined #gluster
18:57 johnmwilliams joined #gluster
19:00 jbrooks joined #gluster
19:09 fyxim joined #gluster
19:09 mdjunaid joined #gluster
19:15 Fresleven joined #gluster
19:17 P0w3r3d joined #gluster
19:32 KORG joined #gluster
20:01 Ramereth|home joined #gluster
20:02 Peanut__ joined #gluster
20:04 johnmark_ joined #gluster
20:06 dbruhn joined #gluster
20:06 ingard_ joined #gluster
20:09 crashmag joined #gluster
20:09 crashmag joined #gluster
20:09 semiosis joined #gluster
20:09 l0uis joined #gluster
20:09 crashmag joined #gluster
20:09 l0uis joined #gluster
20:09 askb joined #gluster
20:13 badone_ joined #gluster
20:13 NuxRo joined #gluster
20:45 hngkr joined #gluster
20:52 edong23 joined #gluster
21:09 kaptk2 joined #gluster
21:13 Fresleven joined #gluster
21:28 kPb_in_ joined #gluster
21:31 Fresleven_ joined #gluster
21:34 peacock_ joined #gluster
21:47 diegows_ joined #gluster
22:16 RobertLaptop joined #gluster
22:24 fidevo joined #gluster
22:38 failshel_ joined #gluster
22:39 kodiakFiresmith joined #gluster
23:28 plarsen joined #gluster
