
IRC log for #gluster, 2015-04-30


All times shown according to UTC.

Time Nick Message
00:12 kdhananjay joined #gluster
00:41 halfinhalfout joined #gluster
01:03 halfinhalfout joined #gluster
01:22 rastar_afk joined #gluster
01:28 halfinhalfout joined #gluster
01:46 halfinhalfout joined #gluster
01:50 meghanam joined #gluster
02:08 harish_ joined #gluster
02:15 wkf joined #gluster
02:21 pppp joined #gluster
02:24 gildub joined #gluster
02:37 kdhananjay joined #gluster
02:47 ir2ivps5 joined #gluster
02:51 sakshi joined #gluster
02:56 bharata-rao joined #gluster
03:06 sakshi joined #gluster
03:11 kdhananjay joined #gluster
03:20 semiosis joined #gluster
03:31 shubhendu joined #gluster
03:36 Le22S joined #gluster
03:44 itisravi joined #gluster
03:46 atinmu joined #gluster
03:47 shubhendu_ joined #gluster
03:51 smohan joined #gluster
04:04 overclk joined #gluster
04:10 nbalacha joined #gluster
04:11 anrao joined #gluster
04:16 kumar joined #gluster
04:20 nishanth joined #gluster
04:22 RameshN joined #gluster
04:22 ndarshan joined #gluster
04:28 meghanam joined #gluster
04:29 anoopcs joined #gluster
04:35 rjoseph joined #gluster
04:37 jiffin joined #gluster
04:37 kanagaraj joined #gluster
04:38 kripper joined #gluster
04:39 kripper Hi, I need to reliably mount a gluster volume at boot, preferably using systemd, since another service requires it
04:40 kripper I added a bunch of Requires= directives to a systemd .mount file, but sometimes the host is not able to resolve names
04:40 anrao_ joined #gluster
04:40 kripper is there a reliable way to achieve it?
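A minimal systemd mount unit along the lines kripper describes, as a sketch only: the volume name (myvol), server (server1), and mount point (/mnt/gluster) are placeholders, and ordering the unit after network-online.target and glusterd.service is what addresses the name-resolution race he mentions.

    # /etc/systemd/system/mnt-gluster.mount -- the unit name must match the Where= path
    [Unit]
    Description=GlusterFS mount for myvol
    Wants=network-online.target
    After=network-online.target glusterd.service

    [Mount]
    What=server1:/myvol
    Where=/mnt/gluster
    Type=glusterfs
    Options=defaults,_netdev

    [Install]
    WantedBy=multi-user.target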
04:44 soumya_ joined #gluster
04:44 poornimag joined #gluster
04:45 hagarth joined #gluster
04:49 harish_ joined #gluster
04:49 raghug joined #gluster
05:03 gem joined #gluster
05:06 glusterbot News from resolvedglusterbugs: [Bug 1198573] libgfapi APIs overwrite the existing THIS value when called from other xlators like snapview <https://bugzilla.redhat.com/show_bug.cgi?id=1198573>
05:15 SOLDIERz joined #gluster
05:17 rafi joined #gluster
05:18 Apeksha joined #gluster
05:19 ppai joined #gluster
05:22 Bhaskarakiran joined #gluster
05:23 kshlm joined #gluster
05:25 rbazen joined #gluster
05:26 smohan_ joined #gluster
05:27 gem joined #gluster
05:28 hgowtham joined #gluster
05:31 lalatenduM joined #gluster
05:35 smohan joined #gluster
05:35 sakshi joined #gluster
05:37 kdhananjay joined #gluster
05:38 hgowtham joined #gluster
05:38 pdrakewe_ joined #gluster
05:41 anil joined #gluster
05:50 dusmant joined #gluster
05:52 Anjana joined #gluster
05:52 hagarth joined #gluster
05:55 Guest4581 joined #gluster
05:57 ktosiek joined #gluster
06:01 spandit joined #gluster
06:04 deepakcs joined #gluster
06:04 glusterbot News from newglusterbugs: [Bug 1217311] Disperse volume: gluster volume status doesn't show shd status <https://bugzilla.redhat.com/show_bug.cgi?id=1217311>
06:06 mbukatov joined #gluster
06:10 cholcombe joined #gluster
06:15 SOLDIERz joined #gluster
06:16 sakshi joined #gluster
06:17 nsoffer joined #gluster
06:23 atalur joined #gluster
06:26 smohan_ joined #gluster
06:27 jtux joined #gluster
06:30 ghenry joined #gluster
06:34 glusterbot News from newglusterbugs: [Bug 1217322] Disperse volume: Transport endpoint not connected in nfs log messages though the volume is started <https://bugzilla.redhat.com/show_bug.cgi?id=1217322>
06:35 ktosiek joined #gluster
06:36 raghu joined #gluster
06:48 LebedevRI joined #gluster
06:50 liquidat joined #gluster
06:52 liquidat joined #gluster
06:52 ashiq joined #gluster
06:56 Guest4581 joined #gluster
06:57 kshlm joined #gluster
07:01 schandra joined #gluster
07:02 al joined #gluster
07:04 kanagaraj_ joined #gluster
07:12 coredump joined #gluster
07:14 Anjana joined #gluster
07:15 m0zes joined #gluster
07:18 JPaul joined #gluster
07:20 dusmant joined #gluster
07:26 hgowtham joined #gluster
07:27 itisravi joined #gluster
07:29 jcastill1 joined #gluster
07:29 o5k_ joined #gluster
07:30 schandra1 joined #gluster
07:32 SOLDIERz joined #gluster
07:34 jcastillo joined #gluster
07:37 fsimonce joined #gluster
07:39 hagarth joined #gluster
07:44 ctria joined #gluster
07:45 ctria joined #gluster
07:47 dusmant joined #gluster
07:48 cholcombe joined #gluster
07:48 mmbash joined #gluster
07:51 Apeksha joined #gluster
07:55 DV joined #gluster
08:03 Leildin joined #gluster
08:12 kripper left #gluster
08:12 kripper joined #gluster
08:12 kripper left #gluster
08:18 deniszh joined #gluster
08:21 kovshenin joined #gluster
08:23 liquidat joined #gluster
08:26 smohan joined #gluster
08:28 kovshenin joined #gluster
08:31 ktosiek joined #gluster
08:32 kovshenin joined #gluster
08:33 nsoffer joined #gluster
08:34 Norky joined #gluster
08:36 kovshenin joined #gluster
08:39 kanagaraj joined #gluster
08:40 kovshenin joined #gluster
08:41 [Enrico] joined #gluster
08:47 SOLDIERz joined #gluster
08:50 yosafbridge joined #gluster
08:54 klaxa|work joined #gluster
08:55 Apeksha joined #gluster
08:56 Anjana joined #gluster
08:56 kovshenin joined #gluster
08:58 soumya_ joined #gluster
09:03 kovsheni_ joined #gluster
09:04 deniszh joined #gluster
09:05 glusterbot News from newglusterbugs: [Bug 1217372] Disperse volume: NFS client mount point hung after the bricks came back up <https://bugzilla.redhat.com/show_bug.cgi?id=1217372>
09:06 spalai joined #gluster
09:07 Leildin Hi, has anyone had any problems writing files to gluster volumes via python scripts? We seem to be getting closed-off file descriptors for some reason
09:08 dusmantkp_ joined #gluster
09:08 spalai left #gluster
09:09 spalai joined #gluster
09:11 anrao joined #gluster
09:11 anrao_ joined #gluster
09:12 rafi joined #gluster
09:14 rafi joined #gluster
09:16 schandra2 joined #gluster
09:18 gildub joined #gluster
09:33 o5k joined #gluster
09:36 atalur joined #gluster
09:44 kovshenin joined #gluster
09:48 SOLDIERz joined #gluster
09:48 atinmu joined #gluster
09:51 rafi1 joined #gluster
09:55 suliba joined #gluster
10:02 Norky joined #gluster
10:03 ashiq joined #gluster
10:04 Bhaskarakiran_ joined #gluster
10:05 atinmu joined #gluster
10:06 kripper joined #gluster
10:07 Anjana joined #gluster
10:07 dusmantkp_ joined #gluster
10:09 mtpmoni joined #gluster
10:13 ira joined #gluster
10:16 DV_ joined #gluster
10:18 ira joined #gluster
10:27 ashiq joined #gluster
10:29 atalur joined #gluster
10:34 RameshN joined #gluster
10:38 kkeithley1 joined #gluster
10:38 kripper1 joined #gluster
10:44 saurabh_ joined #gluster
10:44 ndarshan joined #gluster
10:47 hagarth joined #gluster
10:49 SOLDIERz joined #gluster
10:53 dusmantkp_ joined #gluster
10:56 suliba joined #gluster
11:00 saurabh_ joined #gluster
11:05 SOLDIERz joined #gluster
11:05 smohan joined #gluster
11:05 glusterbot News from newglusterbugs: [Bug 1217429] geo-rep: add debug logs to master for slave ENTRY operation failures <https://bugzilla.redhat.com/show_bug.cgi?id=1217429>
11:11 ashiq- joined #gluster
11:14 harish_ joined #gluster
11:19 kovshenin joined #gluster
11:20 coreping joined #gluster
11:20 _Bryan_ joined #gluster
11:23 coreping joined #gluster
11:24 anrao_ joined #gluster
11:26 anrao joined #gluster
11:32 here_and_there JoeJulian: thanks, I did it, with no troubles, indeed
11:32 coreping joined #gluster
11:36 glusterbot News from newglusterbugs: [Bug 1210404] BVT; Selinux throws AVC errors while running DHT automation on Rhel6.6 <https://bugzilla.redhat.com/show_bug.cgi?id=1210404>
11:36 cholcombe joined #gluster
11:40 dusmantkp_ joined #gluster
11:40 bennyturns joined #gluster
11:53 kovsheni_ joined #gluster
11:55 atalur joined #gluster
11:59 RameshN joined #gluster
12:05 rafi joined #gluster
12:06 glusterbot News from newglusterbugs: [Bug 1217445] data tiering: tiering core functionality Data heating/cooling not working on a tiered volume <https://bugzilla.redhat.com/show_bug.cgi?id=1217445>
12:06 kovshenin joined #gluster
12:07 vovcia joined #gluster
12:07 vovcia hi o/
12:11 vovcia do you know of any issues with very slow locking on dispersed volumes?
12:17 kshlm joined #gluster
12:19 lalatenduM joined #gluster
12:19 nsoffer joined #gluster
12:24 lalatenduM joined #gluster
12:25 atalur joined #gluster
12:26 saurabh_ joined #gluster
12:29 anil joined #gluster
12:29 sakshi joined #gluster
12:29 spandit joined #gluster
12:29 anrao joined #gluster
12:29 maveric_amitc_ joined #gluster
12:29 soumya_ joined #gluster
12:29 meghanam joined #gluster
12:29 rp joined #gluster
12:29 hchiramm joined #gluster
12:29 hagarth joined #gluster
12:30 rastar_afk joined #gluster
12:31 rafi joined #gluster
12:31 sac joined #gluster
12:31 kdhananjay joined #gluster
12:35 anoopcs joined #gluster
12:35 bene2 joined #gluster
12:39 spalai joined #gluster
12:43 ndevos REMINDER: in ~45 minutes from now, there will be a presentation/showcase of NFS-Ganesha for Gluster: https://plus.google.com/events/c9omal6366f2cfkcd0iuee5ta1o
12:49 ashiq- joined #gluster
12:53 Slashman joined #gluster
12:54 wkf joined #gluster
12:55 lalatenduM joined #gluster
12:57 Gill joined #gluster
13:06 [Enrico] joined #gluster
13:11 kovshenin joined #gluster
13:13 halfinhalfout joined #gluster
13:15 SOLDIERz joined #gluster
13:19 halfinhalfout joined #gluster
13:24 atalur joined #gluster
13:29 soumya_ joined #gluster
13:31 meghanam joined #gluster
13:31 meghanam presentation/showcase of NFS-Ganesha for Gluster: https://plus.google.com/events/c9omal6366f2cfkcd0iuee5ta1o
13:32 meghanam starting in 3 minutes
13:34 SOLDIERz joined #gluster
13:35 ppai joined #gluster
13:38 jobewan joined #gluster
13:41 hamiller joined #gluster
13:49 cholcombe joined #gluster
13:51 theron joined #gluster
13:54 georgeh-LT2 joined #gluster
13:57 shubhendu_ joined #gluster
14:02 hagarth joined #gluster
14:02 marbu joined #gluster
14:06 julim joined #gluster
14:08 wushudoin joined #gluster
14:14 rafi joined #gluster
14:14 anoopcs joined #gluster
14:14 anil joined #gluster
14:14 soumya_ joined #gluster
14:14 ppai joined #gluster
14:14 rp joined #gluster
14:15 hagarth joined #gluster
14:15 bennyturns joined #gluster
14:15 maveric_amitc_ joined #gluster
14:15 sac joined #gluster
14:16 hchiramm joined #gluster
14:16 SOLDIERz joined #gluster
14:23 wushudoin joined #gluster
14:25 lalatenduM joined #gluster
14:27 plarsen joined #gluster
14:29 kaushal_ joined #gluster
14:33 nbalacha joined #gluster
14:36 kshlm joined #gluster
14:36 glusterbot News from newglusterbugs: [Bug 1138992] gluster.org broken links <https://bugzilla.redhat.com/show_bug.cgi?id=1138992>
14:36 glusterbot News from newglusterbugs: [Bug 1157462] Dead Link - Missing Documentation <https://bugzilla.redhat.com/show_bug.cgi?id=1157462>
14:37 roost joined #gluster
14:42 lpabon joined #gluster
14:42 cholcombe joined #gluster
14:45 mbukatov joined #gluster
14:49 saurabh joined #gluster
14:52 siel joined #gluster
14:56 georgeh-LT2 joined #gluster
14:59 kaushal_ joined #gluster
15:03 SOLDIERz joined #gluster
15:12 rafi joined #gluster
15:16 cholcombe joined #gluster
15:29 kovsheni_ joined #gluster
15:35 deepakcs joined #gluster
15:39 m0zes joined #gluster
15:47 anrao joined #gluster
15:50 kripper1 my gluster mounts are frozen
15:50 kripper1 can someone please help me to debug?
15:53 Leildin They'll need to see some logs to help, I think
15:54 Leildin what is frozen, the mount points or the volume ?
15:54 Leildin gluster volume status
15:54 Leildin and all that stuff
15:58 kripper1 mount point
15:59 kripper1 no, the volume - I tried to mount it on another dir
15:59 kripper1 what logs are relevant?
16:01 kripper1 Logs: http://fpaste.org/217263/30409655/
16:01 kripper1 ed to set keep-alive: Invalid argument
16:01 kripper1 [2015-04-30 16:01:22.890220] W [socket.c:923:__socket_keepalive] 0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 43, Invalid argument
16:01 kripper1 [2015-04-30 16:01:22.890232] E [socket.c:3015:socket_connect] 0-management: Failed to set keep-alive: Invalid argument
16:01 kripper1 [2015-04-30 16:01:25.892526] W [socket.c:923:__socket_keepalive] 0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 43, Invalid argument
16:01 kripper1 [2015-04-30 16:01:25.892541] E [socket.c:3015:socket_connect] 0-management: Failed to set keep-alive: Invalid argument
16:01 kripper1 [2015-04-30 16:01:28.894790] W [socket.c:923:__socket_keepalive] 0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 43, Invalid argument
16:01 kripper1 [2015-04-30 16:01:28.894808] E [socket.c:3015:socket_connect] 0-management: Failed to set keep-alive: Invalid argument
16:02 kripper1 how can I debug?
16:04 kripper1 please forget these logs, a node was down. These are not the problem
16:05 kripper1 [2015-04-30 16:05:10.727268] E [rpc-clnt.c:201:call_bail] 0-datacenter-client-2: bailing out frame type(GlusterFS 3.3) op(LOOKUP(27)) xid = 0x8cdb sent = 2015-04-30 15:35:01.679530. timeout = 1800 for 209.126.105.98:49155
16:05 kripper1 [2015-04-30 16:05:10.727311] W [client-rpc-fops.c:2825:client3_3_lookup_cbk] 0-datacenter-client-2: remote operation failed: Transport endpoint is not connected. Path: / (00000000-0000-0000-0000-000000000001)
16:09 kshlm joined #gluster
16:17 ktosiek joined #gluster
16:18 markfletcher joined #gluster
16:22 markfletcher I'm interested in testing Gluster 3.7 - which I understand has improved small-file performance. Are there RPMs available that I can use? I'm running RHEL 7.1
16:24 raza_wrk joined #gluster
16:24 raza_wrk Is gluster a production-ready choice for VM storage?
16:31 markfletcher_ joined #gluster
16:33 kripper1 raza_wrk: I don't think so
16:34 raza_wrk kripper1, can you expand on that please?
16:34 kripper1 I'm dealing with deadlocks
16:35 kripper1 besides, QEMU doesn't reopen file descriptors. If a gluster process is restarted, the VM stalls
16:36 raza_wrk does that still happen with something like Open vStorage in between the hypervisor and the storage?
16:36 kripper1 there are reports of sanlock blocking gluster easily
16:37 kripper1 I believe that, in the end, the problem is in gluster
16:37 kripper1 it's too easy to have the storage locked
16:38 raza_wrk gluster or object-based storage in general
16:38 kripper1 gluster I guess
16:38 cholcombe joined #gluster
16:38 kripper1 it's not recovering and deadlocking
16:39 kripper1 and I think it's not even network related
16:39 hagarth joined #gluster
16:40 kripper1 I hope we can fix it
16:40 raza_wrk yea that sounds very unfun
16:40 kripper1 I took logs and straces, but since I don't know the internals, I don't know what to do with them
16:41 kripper1 I will report it and try to debug it
16:41 markfletcher Anyone know if 3.7 is still on for release May 6th?
16:49 gem joined #gluster
16:50 rcschool_ joined #gluster
16:50 JoeJulian markfletcher: http://meetbot.fedoraproject.org/gluster-meeting/2015-04-29/gluster-meeting.2015-04-29-12.01.html
16:51 JoeJulian click the link on the agenda item to see the actual channel log with the conversation.
16:51 markfletcher Awesome! thank you, reading now
16:53 JoeJulian raza_wrk: yes, a lot of people use gluster for vm storage.
16:55 raza_wrk JoeJulian, so how does that work? object-based storage is puts and gets, whereas VMs require block storage
16:55 nsoffer joined #gluster
16:55 JoeJulian kripper1: [2015-04-30 13:54:29.769228] E [socket.c:2332:socket_connect_finish] 0-datacenter-client-1: connection to 209.126.107.76:24007 failed (No route to host)
16:55 JoeJulian Might want to fix that.
16:56 JoeJulian raza_wrk: glusterfs is a posix filesystem.
16:56 ashiq- joined #gluster
16:56 JoeJulian If you use kvm, qemu even supports connecting directly to the volume using libgfapi which avoids a lot of context switches.
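A rough sketch of what that looks like on the qemu command line, assuming a volume named myvol on host server1 and an image vm1.qcow2 (all placeholder names); the gluster:// URL makes qemu talk to the volume over libgfapi instead of going through a FUSE mount:

    qemu-system-x86_64 \
        -drive file=gluster://server1/myvol/vm1.qcow2,format=qcow2,if=virtio \
        ...other options...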
16:56 markfletcher JoeJulian, I'll probably sign up to the mailing lists so I'll get notified when RPMs become available
16:56 rcschool joined #gluster
16:57 JoeJulian Sounds like a plan. :D
16:57 raza_wrk how does gluster differ from say ceph?
16:58 JoeJulian raza_wrk: And as kripper1 has shown, make sure your network is working to avoid the issues he's describing.
17:01 nbalacha joined #gluster
17:02 JoeJulian ceph is block storage. It puts chunks of data on a cluster of storage servers to create a block device. It's much more difficult to set up, even more difficult to set up efficiently, and takes an expert to solve any issues that come up. It's also a little bit slower. On the plus side, it's very robust in terms of data integrity. If you lose an osd, after a timeout period a new replica is made automatically to some other osd. If you're retiring
17:02 JoeJulian an osd, set it to out and wait for the pgs to be moved. Down it, remove it, done.
17:02 uxbod joined #gluster
17:03 coredump joined #gluster
17:04 ekuric joined #gluster
17:05 JoeJulian Gluster is a filesystem. It stores files whole* and keeps replicas on pre-defined sets*. The migration when removing a brick is not reliable, nor is rebalance (same fault). It's easy to set up, works as efficiently as possible right out of the box, and is very fast when connecting over libgfapi. If something does go wrong, it doesn't take an expert to fix it and with the support you can get in this channel alone, most things can be fixed pretty
17:05 JoeJulian quickly.
17:05 JoeJulian * those features are currently being improved.
17:05 rcschool joined #gluster
17:06 JoeJulian There's work being done on rebalance, too, which will hopefully fix the brick removal and rebalance issues.
17:08 markfletcher I'm interested in v3.7 for small-file performance. I'm trying to host WordPress sites on a glusterfs volume and copying files to the volume is slow - if you have any perf tuning tips, I'm all ears
17:08 raza_wrk when you say removing a brick, do you mean it loses a storage device?
17:09 raza_wrk Sorry i am a dumb CIO not an admin :)
17:10 JoeJulian bricks are the storage devices. Removal would be a planned event.
17:10 raza_wrk if one fails?
17:10 raza_wrk do you have the same rebalance issue?
17:11 JoeJulian No, if it fails you're not moving data from it anyway. Just replace it and let replication take care of the rest.
17:11 raza_wrk If I'm going to be running core production VMs with, say, Microsoft Exchange or MySQL, what is my exposure if a brick fails?
17:11 raza_wrk okay
17:11 JoeJulian I'd say your biggest exposure is running Exchange. ;)
17:12 raza_wrk Well if open source had a real option it would be considered. :)
17:12 jackdpeterson joined #gluster
17:14 JoeJulian If I had to run exchange, I'd put postfix with amavisd in front of it. The spam filtration we get on our hosted exchange at IO is abysmal.
17:14 raza_wrk We use barracuda appliances
17:14 JoeJulian That'll work. :)
17:15 * JoeJulian just isn't a MS fan.
17:15 JoeJulian I know way too many employees.
17:15 raza_wrk I wouldn't expect to find many in this channel
17:15 JoeJulian I also live in the Puget Sound area.
17:16 JoeJulian Nothing like knowing how a company dysfunctions to give you a bad taste about how their software is built.
17:17 JoeJulian This new CEO is improving things though.
17:22 kripper1 https://bugzilla.redhat.com/show_bug.cgi?id=1217576
17:22 glusterbot Bug 1217576: urgent, unspecified, ---, rhs-bugs, NEW , Gluster volume locks the whole cluster
17:25 Rapture joined #gluster
17:29 theron_ joined #gluster
17:39 RameshN joined #gluster
17:40 hflai_ joined #gluster
17:41 o5k_ joined #gluster
17:41 xavih_ joined #gluster
17:42 partner joined #gluster
17:42 R0ok__ joined #gluster
17:42 msvbhat_ joined #gluster
17:42 nixpanic_ joined #gluster
17:42 jotun_ joined #gluster
17:42 Bardack_ joined #gluster
17:42 nixpanic_ joined #gluster
17:42 tom][ joined #gluster
17:45 mattmcc_ joined #gluster
17:45 sac` joined #gluster
17:45 necrogami_ joined #gluster
17:45 B21956 joined #gluster
17:46 kumar joined #gluster
17:46 Alpinist joined #gluster
17:47 uxbod joined #gluster
17:49 nhayashi joined #gluster
17:49 JustinCl1ft joined #gluster
17:49 marcoceppi_ joined #gluster
17:49 ktosiek joined #gluster
17:49 bennyturns joined #gluster
17:49 zerick joined #gluster
17:49 T0aD joined #gluster
17:49 edong23 joined #gluster
17:49 Lee- joined #gluster
17:50 JustinCl1ft joined #gluster
17:52 tessier Hmm... Why am I only getting around 30MB/s copying a file onto my gluster-mounted fs, and while doing so glusterd is using 30% of the CPU? Not very impressive performance-wise.
17:53 bfoster joined #gluster
17:54 cholcombe joined #gluster
17:54 jobewan joined #gluster
17:54 roost joined #gluster
17:54 kbyrne joined #gluster
18:03 rbazen Good evening, I have a question. I have a configured gluster server with 8 disks in RAID 5; the array is formatted XFS and mounted. I created a volume and filled it with data
18:04 JoeJulian tessier: how many replicas?
18:04 rbazen Now I want to remove the volume and remake it as a replica, but will gluster know not to remove the data?
18:04 JoeJulian Gluster doesn't remove the data when a volume is deleted.
18:05 JoeJulian To reuse the bricks, you'll need to look at the following: ,,(path or prefix)
18:05 glusterbot following: http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
18:05 glusterbot http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
18:05 JoeJulian Also be aware of ,,(brick order)
18:05 glusterbot Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
18:06 rbazen Ok, thanks!
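For reference, the "path or a prefix of it" cleanup that JoeJulian links typically amounts to stripping the old volume metadata from the brick before reusing it; a sketch only, with /data/brick1 standing in for the real brick path:

    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    rm -rf /data/brick1/.glusterfs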
18:11 tessier JoeJulian: Two replicas.
18:11 JoeJulian So you're writing at 60MB/s
18:13 tessier I suppose. Still seems slow. The drives themselves can do 70MB/s at least and the gigabit ethernet with iSCSI and other protocols typically does 100MB/s
18:14 JoeJulian Is this single-file writes?
18:15 siel joined #gluster
18:15 tessier Yes
18:17 JoeJulian I have always been able to max out any network connection. Make sure your block size is sufficient to fill your MTU.
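One rough way to check sequential write throughput with a large block size on the mounted volume (path and sizes are arbitrary):

    dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1024 conv=fdatasync
    rm /mnt/gluster/ddtest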
18:17 rbazen So, if I do gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 ; it will copy the existing data on server1:/data/brick1 to server2? I really don't want to lose the 25TB :P
18:17 tessier Ah, that's probably part of it. I need to move the gluster activity onto my 9000 mtu segment.
18:17 JoeJulian Also, that much cpu usage for simple writes is uncommon. Perhaps you have some other hardware issue.
18:18 JoeJulian rbazen: yes
18:19 rbazen JoeJulian: Thanks a lot!
18:20 tessier It's a dual-core Xeon 5110 @ 1.6GHz. Not real fast or modern, but 30 or 60MB/s shouldn't faze it.
18:21 JoeJulian There's like 3 context switches per packet (iirc), so CPU and memory speed could possibly come into play.
18:23 kripper joined #gluster
18:26 JoeJulian "Leildin> ... regarding the "fragmentation" on a volume would the rebalance tools act as a fragmentation remover while it shifts data ?"
18:27 JoeJulian Leildin: no. rebalancing will only move files whose filename hash doesn't match the DHT map. The bricks themselves may still be fragmented, and moving files between bricks will not cure that (in fact, it could potentially make it worse).
18:28 JoeJulian Odds are the fragmentation isn't going to matter, though. Multiple clients with multiple file needs could potentially make your disk readahead cache worthless anyway.
18:28 Philambdo joined #gluster
18:32 Rapture joined #gluster
18:37 glusterbot News from newglusterbugs: [Bug 1217589] glusterd crashed while schdeuler was creating snapshots when bit rot was enabled on the volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1217589>
18:47 theron joined #gluster
18:53 Philambdo joined #gluster
18:58 rbazen Hmm. removing the old .glusterfs directory. It's huge. :o
18:58 rbazen or has a lot of files
19:02 JoeJulian It has a hardlink for every file, and a symlink for every directory.
19:02 Bardack joined #gluster
19:03 rbazen rm-ing a hardlink just clears the inode right 0_0
19:03 * rbazen is in paranoia mode
19:04 rbazen Ok, good
19:05 JoeJulian Yeah, it just clears the filename, reducing the inode link count.
19:06 * rbazen starts to wonder in what case gluster actually will nuke all the data.
19:06 rbazen so far it has been pretty darned forgiving
19:13 rbazen @JoeJulian: Found a bug on your website. The tags on, for example, https://joejulian.name/blog/keeping-your-vms-from-going-read-only-when-encountering-a-ping-timeout-in-glusterfs/ cause an HTTP 404
19:13 rbazen Just a heads up
19:14 halfinhalfout1 joined #gluster
19:20 anrao joined #gluster
19:22 77CAAR0CL joined #gluster
19:22 schwing_ joined #gluster
19:25 rcschool joined #gluster
19:25 kripper joined #gluster
19:27 rcschool joined #gluster
19:29 JoeJulian thanks
19:30 cholcombe joined #gluster
19:32 kripper left #gluster
19:33 vincent_1dk joined #gluster
19:34 rshade98 joined #gluster
19:34 schwing_ i just upgraded my gluster to 3.6.2 with Ubuntu's PPAs and am now unable to mount remote clients (which are also upgraded). if i switch the type in my mount command to nfs they mount fine, but not as type glusterfs
19:34 harish_ joined #gluster
19:34 schwing_ [2015-04-30 19:34:01.178063] E [glusterfsd-mgmt.c:1494:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
19:35 schwing_ [2015-04-30 19:34:01.178107] E [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/gv0)
19:36 schwing_ this is my entry from /etc/fstab:
19:36 schwing_ gluster-d01:/gv0  /mnt/gluster  glusterfs  defaults,_netdev  0 0
19:36 rbazen schwing_: I found this: https://www.gluster.org/pipermail/gluster-users.old/2015-February/020872.html
19:37 rbazen It might be related
19:40 schwing_ thanks!  i'll give that a shot.
19:41 rbazen Or more the link on that page: https://www.gluster.org/pipermail/gluster-users/2015-February/020781.html
19:41 schwing_ yea, that's the one that talks about regenerating the volfile info.  hoping that fixes it
19:43 schwing_ ugh. ... glusterd: invalid xlator option  *upgrade=on
19:45 rbazen https://www.gluster.org/pipermail/gluster-users/2013-January/012345.html
19:46 rbazen # glusterd --xlator-option *.upgrade=on -N
19:51 schwing_ ah, missing a dot
19:52 schwing_ it threw some RDMA errors but updated the files in /var/lib/glusterd/vols/gv0/
19:56 theron_ joined #gluster
19:59 deniszh joined #gluster
20:00 schwing_ awesome!  that fixed it.  i can mount my remote client.
20:01 rbazen Nice!
20:01 schwing_ thanks for the help, rbazen!
20:01 rbazen Just passing it forward.
20:01 rcschool joined #gluster
20:04 rbazen Maker, deleting 25TB of .glusterfs on a raid5 takes a long time -_-
20:06 Jmainguy lol
20:06 Jmainguy nice
20:06 Jmainguy I took out 8tb pretty quick
20:06 Jmainguy mkfs.xfs /blah
20:06 Jmainguy they were a little angry with me
20:06 rbazen Heh.... ouch
20:24 Rapture joined #gluster
20:33 rbazen Ah, interesting. xfs is known for having poor performance when deleting(unlinking) lots and lots of nested directories
20:33 JoeJulian When?
20:33 JoeJulian Make sure you're looking at current information. xfs has changed a lot over the years.
20:33 * rbazen nods
20:34 rbazen Indeed, most of these posts are pretty dated.
20:34 rbazen I am just not knowledgeable enough about filesystems yet to make good calls on optimisations.
20:37 rbazen also, running a df -h seems to take a long time, but iostat shows no activity.
20:38 social joined #gluster
20:40 schwing_ left #gluster
20:41 rbazen Sigh, found the problem. Had an open nfs mount still...
20:41 rbazen one lazy unmount later...
20:44 halfinhalfout joined #gluster
20:44 coredump joined #gluster
20:49 markfletcher left #gluster
21:04 theron joined #gluster
21:08 dewey joined #gluster
21:12 coredump joined #gluster
21:33 wkf joined #gluster
21:34 redbeard joined #gluster
21:41 6JTAA0TVB joined #gluster
21:47 kovshenin joined #gluster
21:49 stickyboy joined #gluster
21:50 kovsheni_ joined #gluster
21:50 mkzero joined #gluster
21:51 lexi2 joined #gluster
21:53 kovshenin joined #gluster
21:56 kovshenin joined #gluster
21:58 kovshenin joined #gluster
22:06 5EXAA363F joined #gluster
22:35 julim joined #gluster
23:09 Pupeno joined #gluster
23:14 masterzen joined #gluster
23:18 Pupeno joined #gluster
23:23 eightyeight i'm upgrading from 3.2 -> 3.5 on two debian 8 KVM hypervisors
23:23 eightyeight glusterfs is used for VM image replication
23:23 eightyeight i've outlined my steps for the upgrade as follows: http://ae7.st/p/199
23:23 eightyeight anything i'm missing?
23:27 gildub joined #gluster
23:31 bit4man joined #gluster
23:38 JoeJulian When you stop the glusterfs daemon, that will only stop glusterd. I would stop the volume before stopping glusterfs-server (ie. gluster volume stop $vol). Maybe even a pkill -f gluster after stopping the service just to be sure everything has stopped.
23:42 eightyeight ok. good call.
23:48 eightyeight updated: http://ae7.st/p/8wa
23:50 JoeJulian "pkill -f gluster". Kills anything with "gluster" in the command line, including gluster, glusterfsd, and glusterd.
23:52 eightyeight that's mostly a placeholder. i actually intend to get the PIDs via "ps -fC glusterfs" and "ps -fC glusterfsd", and kill them manually, one at a time
23:54 JoeJulian looks good to me then
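A sketch of the resulting stop sequence on each Debian 8 node, per JoeJulian's suggestion; the volume name myvol is a placeholder:

    gluster volume stop myvol
    service glusterfs-server stop
    # confirm nothing gluster-related is still running; kill any leftover PIDs individually
    ps -fC glusterfs
    ps -fC glusterfsd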
