
IRC log for #gluster, 2015-02-02


All times shown according to UTC.

Time Nick Message
00:03 hybrid512 joined #gluster
00:04 T3 joined #gluster
00:19 doekia joined #gluster
00:24 Pupeno joined #gluster
00:46 sputnik13 joined #gluster
00:56 Pupeno joined #gluster
01:00 side_control joined #gluster
01:03 bala joined #gluster
02:02 lyang0 joined #gluster
02:08 haomaiwa_ joined #gluster
02:17 nangthang joined #gluster
02:22 gem joined #gluster
03:03 T3 joined #gluster
03:08 bharata-rao joined #gluster
03:09 sputnik13 joined #gluster
03:10 gildub joined #gluster
03:20 gem joined #gluster
03:20 T3 joined #gluster
03:31 RameshN joined #gluster
03:40 sputnik13 joined #gluster
03:45 kshlm joined #gluster
03:49 shubhendu joined #gluster
03:50 itisravi joined #gluster
03:52 RameshN joined #gluster
04:00 nishanth joined #gluster
04:00 T3 joined #gluster
04:00 nbalacha joined #gluster
04:03 meghanam joined #gluster
04:17 dusmant joined #gluster
04:23 atinmu joined #gluster
04:27 spandit joined #gluster
04:31 ndarshan joined #gluster
04:32 Pupeno_ joined #gluster
04:39 kanagaraj joined #gluster
04:40 Manikandan joined #gluster
04:43 sakshi joined #gluster
04:44 rafi joined #gluster
04:51 anoopcs joined #gluster
05:08 soumya__ joined #gluster
05:11 prasanth_ joined #gluster
05:15 bala joined #gluster
05:20 nshaikh joined #gluster
05:25 shylesh__ joined #gluster
05:26 ppai joined #gluster
05:32 jiffin joined #gluster
05:32 jobewan joined #gluster
05:39 maveric_amitc_ joined #gluster
05:44 dusmant joined #gluster
05:46 rjoseph joined #gluster
05:51 shubhendu joined #gluster
05:51 ndarshan joined #gluster
06:02 T3 joined #gluster
06:10 soumya__ joined #gluster
06:16 shubhendu joined #gluster
06:17 ramteid joined #gluster
06:17 ndarshan joined #gluster
06:19 bharata-rao joined #gluster
06:20 aravindavk joined #gluster
06:27 kdhananjay joined #gluster
06:31 anil joined #gluster
06:31 gem joined #gluster
06:36 dusmant joined #gluster
06:48 nishanth joined #gluster
06:49 rjoseph joined #gluster
06:57 nangthang joined #gluster
07:01 ndarshan joined #gluster
07:01 glusterbot News from newglusterbugs: [Bug 1075417] Spelling mistakes and typos in the glusterfs source <https://bugzilla.redhat.com/show_bug.cgi?id=1075417>
07:05 anrao joined #gluster
07:07 mbukatov joined #gluster
07:10 hchiramm joined #gluster
07:12 T0aD- joined #gluster
07:17 rjoseph joined #gluster
07:17 RameshN joined #gluster
07:19 raghu joined #gluster
07:24 kanagaraj joined #gluster
07:26 jtux joined #gluster
07:40 jtux joined #gluster
07:40 ricky-ti1 joined #gluster
07:42 ricky-ti1 joined #gluster
07:43 lalatenduM joined #gluster
07:43 dusmant joined #gluster
07:43 [Enrico] joined #gluster
07:43 [Enrico] joined #gluster
07:45 kovshenin joined #gluster
07:51 ricky-ticky joined #gluster
07:52 ricky-ti2 joined #gluster
07:52 kovsheni_ joined #gluster
07:58 smohan joined #gluster
08:01 glusterbot News from newglusterbugs: [Bug 1151696] mount.glusterfs fails due to race condition in `stat` call <https://bugzilla.redhat.com/show_bug.cgi?id=1151696>
08:01 glusterbot News from newglusterbugs: [Bug 1188145] Disperse volume: I/O error on client when USS is turned on <https://bugzilla.redhat.com/show_bug.cgi?id=1188145>
08:11 samuel-et00 joined #gluster
08:14 samuel-et00 I am planning to use gluster for a small PoC using SSD disks, 1 TB each. How many IOPS can I expect? What's the limiting factor in gluster that prevents utilizing the full speed of the SSDs? I have a 10Gbps NIC
08:16 samuel-et00 if my server has a 3Gbps SATA controller, will the maximum throughput be limited to 3Gbps? Or can I get more throughput by parallelising the writes to 4 servers and overcome the limit of this 3Gbps?
08:18 deniszh joined #gluster
08:30 rwheeler joined #gluster
08:30 Slashman joined #gluster
08:31 smohan joined #gluster
08:32 overclk joined #gluster
08:32 abyss^ samuel-et00: wait for evening, there should be more people here who know glusterfs. For IOPS you should do some tests, then you will know. Yes, when you have more glusterfs replicas it should increase throughput (network and disks).
08:33 samuel-et00_ joined #gluster
08:34 samuel-et00_ thanks abyss^
08:35 abyss^ of course it's a network filesystem and works in user space... so tests should answer most of your questions. And wait till evening :)
08:42 elico joined #gluster
08:44 bjornar joined #gluster
08:47 tdasilva joined #gluster
08:54 tanuck joined #gluster
08:55 samuel-et00_ abyss^: I don't have a working cluster to test this, but the limitation of the disk controller came to mind
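A rough way to answer the IOPS question even before a full cluster exists is to benchmark with fio, first against a raw brick directory and then against a gluster mount, and compare the two; this is only a sketch, and /mnt/testvol is a placeholder path:

    # 4k random writes with direct I/O (bypasses the page cache), 4 parallel jobs
    fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
        --bs=4k --iodepth=32 --numjobs=4 --size=1G --runtime=60 \
        --group_reporting --directory=/mnt/testvol

The difference between the brick-local and mount-level results gives a feel for gluster's per-operation overhead on that hardware.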
08:57 hagarth joined #gluster
09:01 shubhendu joined #gluster
09:16 SteveCooling joined #gluster
09:22 ira joined #gluster
09:23 LordFolken interesting
09:24 ralala joined #gluster
09:28 shubhendu joined #gluster
09:30 brendon_ joined #gluster
09:36 overclk joined #gluster
09:46 gildub joined #gluster
09:46 gildub joined #gluster
10:02 glusterbot News from newglusterbugs: [Bug 1188184] Tracker bug :  NFS-Ganesha new features support for  3.7. <https://bugzilla.redhat.com/show_bug.cgi?id=1188184>
10:07 Philambdo joined #gluster
10:12 the-me joined #gluster
10:25 Norky joined #gluster
10:29 monotek joined #gluster
10:32 glusterbot News from newglusterbugs: [Bug 1188196] Change order of translators in brick <https://bugzilla.redhat.com/show_bug.cgi?id=1188196>
10:40 awerner joined #gluster
10:54 nbalacha joined #gluster
10:59 deniszh1 joined #gluster
11:00 spandit joined #gluster
11:00 soumya__ joined #gluster
11:02 Pupeno joined #gluster
11:07 liquidat joined #gluster
11:09 dusmant joined #gluster
11:14 kbyrne joined #gluster
11:33 Slashman_ joined #gluster
11:37 Philambdo joined #gluster
11:41 elico joined #gluster
11:41 quantum Hi all! I have 2 replication nodes. If the first node fails, the second node can't mount the gluster volume. How do I fix it?
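quantum's symptom usually comes down to the mount command naming only one server for fetching the volfile; one commonly used mitigation is the backupvolfile-server mount option, sketched below with placeholder node and volume names:

    # FUSE mount that can fall back to node2 for the volfile if node1 is down at mount time;
    # once mounted, a replica 2 client talks to both bricks directly anyway.
    mount -t glusterfs -o backupvolfile-server=node2 node1:/myvol /mnt/myvol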
11:43 dusmant joined #gluster
11:43 kshlm joined #gluster
11:49 T3 joined #gluster
11:50 T3 joined #gluster
12:00 kanagaraj joined #gluster
12:00 T3 joined #gluster
12:00 atalur joined #gluster
12:02 glusterbot News from newglusterbugs: [Bug 1188242] Disperse volume: client crashed while running iozone <https://bugzilla.redhat.com/show_bug.cgi?id=1188242>
12:02 glusterbot News from resolvedglusterbugs: [Bug 867313] RFE: Reduce the initial RDMA protocol check log level from E to W or I <https://bugzilla.redhat.com/show_bug.cgi?id=867313>
12:03 diegows joined #gluster
12:07 t0ma joined #gluster
12:08 t0ma Hi, is there any way to mount a specific directory on a volume? I don't want all the directories accessible on every server.
12:12 t0ma I want to handle shares with directory quotas but I can't find a way to mount specific subdirectories of a volume
12:14 itisravi_ joined #gluster
12:14 harish joined #gluster
12:15 B21956 joined #gluster
12:18 harish joined #gluster
12:19 awerner t0ma: it may be an option for you: you can mount subdirectories with NFS, so if you're not bound to the FUSE-based client you can use an NFS mount
12:19 t0ma awerner: the thing is I want to use FUSE
12:21 t0ma even auth.allow makes no sense when using directories and FUSE
12:21 t0ma going to have to do LVs per brick and volume instead
12:23 t0ma I thought I could have one big volume and brick and then handle everything with directory quotas, but if the mountpoint is at the root level of the volume then every server that uses the same volume will have access to all the directories
12:23 t0ma (one big volume with multiple bricks, I mean)
12:25 awerner yes, this is the concept behind it, unfortunately
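For reference, the NFS route awerner describes can expose a single subdirectory, which the FUSE client cannot; a minimal sketch, assuming a volume called myvol with a subdirectory share1 and gluster's built-in NFS server enabled (all names are placeholders, and depending on configuration the volume's nfs.export-dir option may need to list the subdirectory):

    # Gluster's built-in NFS server speaks NFSv3 over TCP
    mount -t nfs -o vers=3,proto=tcp,mountproto=tcp server1:/myvol/share1 /mnt/share1

    # The FUSE client, by contrast, can only mount the volume root:
    mount -t glusterfs server1:/myvol /mnt/myvol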
12:34 RameshN joined #gluster
12:36 plarsen joined #gluster
12:36 awerner joined #gluster
12:39 overclk joined #gluster
12:40 plarsen joined #gluster
12:44 LebedevRI joined #gluster
12:55 Gill joined #gluster
12:56 smohan_ joined #gluster
12:57 anoopcs joined #gluster
13:07 Manikandan joined #gluster
13:18 RameshN joined #gluster
13:23 sakshi joined #gluster
13:24 plarsen joined #gluster
13:30 lpabon joined #gluster
13:36 sakshi joined #gluster
13:43 ralala joined #gluster
13:46 DV joined #gluster
13:46 kanagaraj_ joined #gluster
13:48 bene joined #gluster
13:49 morse joined #gluster
13:51 RameshN joined #gluster
13:54 bjornar joined #gluster
13:57 chirino joined #gluster
13:58 _Bryan_ joined #gluster
13:59 meghanam joined #gluster
14:02 smohan_ joined #gluster
14:03 dgandhi joined #gluster
14:03 bene joined #gluster
14:11 bennyturns joined #gluster
14:19 anrao joined #gluster
14:19 bene2 joined #gluster
14:20 virusuy joined #gluster
14:20 virusuy joined #gluster
14:25 Slashman_ hello, I'm using glusterfs 3.6.2 and when I try to add a peer to an existing cluster, it is marked as "rejected" (paste: http://apaste.info/mBR). When I try to restart the daemon on the new peer, it fails. Any ideas?
14:26 n-st joined #gluster
14:27 kanagaraj joined #gluster
14:28 jmarley joined #gluster
14:31 kanagaraj joined #gluster
14:33 kanagaraj joined #gluster
14:34 LordFolken Slashman_: I'm not an expect
14:34 LordFolken but I found this in google - http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected
14:35 Philambdo joined #gluster
14:35 Slashman_ LordFolken: thx, will try this
14:37 LordFolken expect=expert
14:37 kanagaraj joined #gluster
14:37 Slashman_ LordFolken: hm... new state "State: Accepted peer request (Connected)"
14:38 Slashman_ never saw this
14:38 Slashman_ it's not in cluster
14:41 jamesSRV joined #gluster
14:41 kanagaraj joined #gluster
14:42 Slashman_ LordFolken: okay, restarting the daemon on the new host solved the issue, thanks
14:42 rafi joined #gluster
14:44 meghanam joined #gluster
14:45 kanagaraj joined #gluster
14:46 Slashman_ oh man, I have the same issue on every peer I'm adding :/
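As commonly described, the recovery on that Resolving_Peer_Rejected page amounts to clearing the rejected peer's cached cluster state and re-probing; a sketch of those steps, run only on the rejected peer (healthy-server1 is a placeholder, and the service commands depend on the distro):

    service glusterd stop                      # or: systemctl stop glusterd
    # keep glusterd.info (this node's own UUID), drop everything else under /var/lib/glusterd
    find /var/lib/glusterd -mindepth 1 ! -name glusterd.info -delete
    service glusterd start
    gluster peer probe healthy-server1         # probe any healthy member of the cluster
    service glusterd restart                   # restart once more, then check 'gluster peer status'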
15:05 lmickh joined #gluster
15:15 jobewan joined #gluster
15:25 Folken__ joined #gluster
15:32 wushudoin joined #gluster
15:32 rcampbel3 joined #gluster
15:36 nage joined #gluster
15:38 siel joined #gluster
15:48 T0aD joined #gluster
15:51 shubhendu joined #gluster
15:53 siel joined #gluster
16:04 jbrooks joined #gluster
16:05 calisto joined #gluster
16:06 DV joined #gluster
16:16 siel joined #gluster
16:17 Slashman_ is it normal to have different checksums between hosts in a cluster for the files in /var/lib/glusterd/vols/VOLNAME ?
16:18 Slashman_ I have a cluster with 6 different hosts, 2 on Debian wheezy and 4 on CentOS 6; all the CentOS hosts have the same checksum for the *.vol files and all the Debian hosts have a different checksum for those
16:19 Slashman_ same gluster version everywhere (3.6.2-1)
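A quick way to see which of those files actually differ between peers is to checksum the per-volume state on every host; a small sketch, with the host list and VOLNAME as placeholders:

    for h in server1 server2 server3; do
        echo "== $h"
        # checksum the volume definition, glusterd's own cksum file, and the generated volfiles
        ssh "$h" 'md5sum /var/lib/glusterd/vols/VOLNAME/info /var/lib/glusterd/vols/VOLNAME/cksum /var/lib/glusterd/vols/VOLNAME/*.vol'
    done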
16:23 shubhendu joined #gluster
16:33 jbrooks joined #gluster
16:36 Pupeno_ joined #gluster
16:48 siel joined #gluster
16:49 _polto_ joined #gluster
16:51 _polto_ hello, I am trying to understand how "stripe 4 replica 2" would work if I have 4 servers...
16:51 _polto_ and does the order I pass the bricks in count in this case...
16:54 gem joined #gluster
16:54 deniszh joined #gluster
16:57 anoopcs joined #gluster
16:59 JoeJulian @brick order
16:59 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
17:00 rcampbel3 joined #gluster
17:00 JoeJulian Oh, also
17:00 JoeJulian @stripe
17:00 glusterbot JoeJulian: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
17:01 Philambdo joined #gluster
17:05 _polto_ JoeJulian: thanks. But what if I have stripe 4 replica 2? Will it stripe 1234 and 5678 and do a replica between them?
17:05 _polto_ I am reading the "should I..."
17:05 rcampbel3 joined #gluster
17:08 _polto_ @distribute
17:09 JoeJulian _polto_: I just checked. It's replica first, then stripe on top.
17:10 _polto_ JoeJulian: thanks
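To make that ordering concrete: with stripe 4 replica 2 the volume needs bricks in multiples of 8, and per JoeJulian's note that it is replica first with stripe on top, adjacent bricks on the command line become the replica pairs and the four resulting pairs are striped together; a sketch with hypothetical brick paths:

    # (server1,server2), (server3,server4), (server1,server2), (server3,server4)
    # form the replica pairs; the 4 pairs are then striped.
    gluster volume create stripevol stripe 4 replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server3:/bricks/b1 server4:/bricks/b1 \
        server1:/bricks/b2 server2:/bricks/b2 \
        server3:/bricks/b2 server4:/bricks/b2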
17:11 _polto_ JoeJulian: I have 4 servers, all 4 provide disks and all 4 do processing on top. Files are from 100k to 500MB
17:12 _polto_ do you think I would benefit from having stripe + replica ?
17:12 _polto_ instead of distributed ?
17:12 JoeJulian Usually no.
17:13 _polto_ JoeJulian: in what cases would it change the performance?
17:13 _polto_ I have a really heavy load on the fs
17:13 JoeJulian I haven't ever seen a test case where stripe improves performance.
17:13 _polto_ wow
17:14 JoeJulian That's not to say that it couldn't, I just haven't seen anybody publish one.
17:14 siel joined #gluster
17:15 _polto_ I try..
17:16 PeterA joined #gluster
17:16 Slashman_ hello, it seems that I have an issue linked to the gluster upgrade from 3.4 to 3.6: my 6 members of the cluster are fine, but I cannot add any more servers to the cluster: the data in the files /var/lib/glusterd/vols/appshare/info and /var/lib/glusterd/vols/appshare/node_state.info contains additional lines on my new server after a peer probe; it seems those lines were added in glusterfs 3.6, but since my cluster was upgraded from
17:16 Slashman_ 3.4, those lines are not present on the old members
17:17 Slashman_ this seems to cause a checksum error, any idea how I can solve this?
17:17 Slashman_ this doesn't work: http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected
17:19 JoeJulian @upgrade
17:19 glusterbot JoeJulian: I do not know about 'upgrade', but I do know about these similar topics: '3.3 upgrade notes', '3.4 upgrade notes'
17:19 JoeJulian @3.4 upgrade notes
17:19 glusterbot JoeJulian: http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
17:19 JoeJulian hmm
17:19 JoeJulian I thought it was in there...
17:19 Slashman_ http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.6 ?
17:20 Slashman_ nothing about that here
17:20 JoeJulian Slashman_: Try running "glusterd --xlator-option *.upgrade=on -N" on the 3.4 upgraded servers.
17:21 JoeJulian If that doesn't work, just rsync the vols tree from the newer servers.
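A minimal sketch of that rsync fallback, with glusterd stopped on the server receiving the files (hostnames are placeholders):

    # on the server with the outdated volume definitions
    service glusterd stop
    rsync -av --delete newer-server:/var/lib/glusterd/vols/ /var/lib/glusterd/vols/
    service glusterd start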
17:21 Slashman_ is this a risky move? it's in prod atm
17:21 JoeJulian Not at all.
17:21 _polto_ JoeJulian: I have 4 servers, 40 disks, let's say 10 per server. If I use replica 2 stripe 4, is it still doing distribution with the other bricks?
17:21 JoeJulian No
17:22 JoeJulian You would have to add bricks in a multiple of 8.
17:23 JoeJulian _polto_: But if you took those 40 disks and put 2 partitions on them, you would then have 80 bricks.
17:23 _polto_ I cannot add x "striped & replicated" bricks and y "just replicated" bricks to the same volume?
17:23 JoeJulian Nope.
17:23 Slashman_ JoeJulian: I don't see any output, could you tell me what it does? I don't see the option on the man page
17:23 _polto_ ok
17:23 deniszh1 joined #gluster
17:24 JoeJulian Slashman_: That /should/ update the vols files to add any changes needed for the newer version.
17:24 JoeJulian It would have happened as part of an rpm update, but I don't know if you're using any other distro.
17:26 Slashman_ JoeJulian: I have 4 centos6 and 2 debian wheezy in the cluster
17:26 JoeJulian interesting
17:28 Slashman_ there are some differences on the /var/lib/glusterd/vols/VOLNAME/*.vol files between centos and debian
17:29 JoeJulian I blame semiosis. ;)
17:30 semiosis whatever :-P
17:31 Slashman_ diff between debian and centos on the vol file: (same brick, same version of glusterfs) http://apaste.info/d7C
17:31 unsignedmark joined #gluster
17:32 unsignedmark Hi! I'm going to replace my two Gluster nodes with new ones, and want to install 3.5 on these. Can I safely "replace-brick" from v3.4.2 to v3.5? Any pointers greatly appreciated!
17:33 JoeJulian Slashman_: Just copy from the newer /var/lib/glusterd/vols tree to the older.
17:33 semiosis JoeJulian: hang on
17:34 Slashman_ JoeJulian: from centos to debian ?
17:34 semiosis JoeJulian: barrier... isn't that a feature of the underlying brick fs?  maybe enabling that xlator on a system that doesn't support barriers will be bad
17:34 semiosis perhaps glusterfs *meant* to generate different volfiles for these bricks, due to differences on the servers
17:34 JoeJulian ... that would be horrible.
17:34 semiosis i've never seen this before, but just hypothesizing
17:34 JoeJulian Me neither.
17:35 semiosis there's nothing in the packaging for debian that would explain this
17:35 Slashman_ I ran the xlator command already
17:35 semiosis Slashman_: do you have a mix of xfs & ext filesystems on your bricks?  or are they all the same?
17:36 JoeJulian Looks like barrier is part of nsr.
17:36 Slashman_ xfs only
17:36 semiosis hrm
17:37 JoeJulian ### Barrier translator
17:37 JoeJulian The barrier translator allows file operations to be temporarily 'paused' on GlusterFS bricks, which is needed for performing consistent snapshots of a GlusterFS volume.
17:37 JoeJulian For more information, see [here] (http://www.gluster.org/community/documentation/index.php/Features/Server-side_Quiesce_feature).
17:37 JoeJulian So it should be safe.
17:37 semiosis snapshots, not NSR
17:37 JoeJulian I saw a lot of barrier in NSR, too, but it's a different barrier.
17:37 semiosis http://xfs.org/index.php/XFS_FAQ#Write_barrier_support.
17:38 Slashman_ hold on, I don't like the "file operations to be temporarily 'paused' on GlusterFS bricks", did I just stop some operations on my cluster?
17:38 semiosis Slashman_: can you check the system log (dmesg, probably) to see if XFS said anything about enabling barriers (or failure to do so)?
17:39 JoeJulian It's not related to the hardware at all.
17:39 semiosis oh?
17:39 psilvao_ joined #gluster
17:39 semiosis then why are the brick vols different?
17:39 JoeJulian It's a translator used for snapshotting.
17:39 Slashman_ looking
17:39 JoeJulian Oh, you missed that part. He upgraded several servers from 3.4 *and* added new servers starting on 3.6.
17:39 rjoseph joined #gluster
17:40 psilvao_ Hi Guys!
17:40 semiosis JoeJulian: oh ok, that explains it, thanks.  mystery solved :)
17:40 JoeJulian barrier came with 3.6 so I suspect it created new vol files where 3.4 didn't.
17:40 psilvao_ it's possible obligate to glusterd works only with ipv4?
17:40 JoeJulian Ok, gotta run out for a while. bbl.
17:40 t0ma left #gluster
17:41 semiosis unsignedmark: do you mean you want to move the block devices (bricks) to new servers, or upgrade your existing servers?
17:41 psilvao_ JoeJulian, hi! A quick question: is it possible to make glusterd work with IPv4 only?
17:42 Slashman_ nothing about barrier in the syslog/dmesg
17:44 Slashman_ semiosis: any idea what I should do? to sum it up, I have upgraded 6 servers from 3.4 to 3.6 and everything works fine, but I cannot add any new servers to the cluster now "State: Peer Rejected (Connected)"
17:45 semiosis ,,(peer rejected)
17:45 glusterbot I do not know about 'peer rejected', but I do know about these similar topics: 'peer-rejected'
17:45 semiosis ,,(peer-rejected)
17:45 glusterbot http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected
17:45 Slashman_ semiosis: already tried that
17:46 semiosis keep trying?
17:46 Slashman_ semiosis: done it several times
17:46 semiosis hah ok
17:47 unsignedmark semiosis: I'm going to move all data to new servers, and figured doing a "replace-brick" would be the best way, simply replacing to the new server. Just installed 3.5.3, but it turns out I can't even "peer probe" the old 3.4.2 node :P Maybe only 3.5 is compatible with 3.4.2?
17:47 Slashman_ I think that the issue is the files /var/lib/glusterd/vols/VOLNAME/info + cksum + node_state.info are different on the new server
17:47 Slashman_ they are the same on the 6 members of the current cluster
17:48 Slashman_ but when I add the new server, it has new lines
17:48 semiosis unsignedmark: you could just start fresh with new servers, move the existing block devices over, then create a new volume with your existing bricks.  just be sure to put your bricks in the same order as before!  and follow these instructions to resolve the path or a prefix of it problem...
17:48 glusterbot semiosis: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
17:50 unsignedmark semiosis: Thanks a bunch for that! I wanted to do the "replace-brick" because of two reasons, one that I need to move the data to new physical disks, and two, because I wanted to avoid downtime, or at least keep it very short. If I had to bring everything down, I'd be looking at several hours to copy everything over and restore volumes.
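For reference, the "path or a prefix of it is already part of a volume" fix that glusterbot links clears the old volume's metadata from a brick being reused while leaving its data in place; a sketch, with /bricks/b1 as a placeholder brick path (run only on bricks intended for the new volume):

    setfattr -x trusted.glusterfs.volume-id /bricks/b1   # remove the old volume's ID xattr
    setfattr -x trusted.gfid /bricks/b1                  # remove the brick root's gfid xattr
    rm -rf /bricks/b1/.glusterfs                         # drop the old gfid hardlink tree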
17:50 semiosis ,,(3.5 upgrade notes)
17:50 glusterbot I do not know about '3.5 upgrade notes', but I do know about these similar topics: '3.3 upgrade notes', '3.4 upgrade notes'
17:51 semiosis shoot
17:51 semiosis I don't know about the 3.4 to 3.5 upgrade, but generally speaking you could upgrade all your servers first, then all your clients.  Note that clients must go down to upgrade, so whether you can do this without interrupting your apps is up to you
17:52 semiosis gotta run, bbl
17:53 Slashman_ :x
17:53 Slashman_ bye
17:53 Pupeno joined #gluster
17:53 unsignedmark semiosis: yeah, I'd probably want to close down all services for a short time no matter what, but I hope I can keep it to minutes instead of hours ;) I think I will install 3.4.2 on the new servers, "replace-brick" everything over, and then upgrade the new servers when all data is moved :)
17:53 unsignedmark semiosis: Thanks for the pointers!
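If the replace-brick route is taken on 3.4.x, the data-migrating form looks roughly like the sketch below (volume, hosts and brick paths are placeholders; newer releases deprecate this start/status/commit migration in favour of "commit force" plus self-heal, so check the release notes of the version actually installed):

    gluster volume replace-brick myvol old-server:/bricks/b1 new-server:/bricks/b1 start
    gluster volume replace-brick myvol old-server:/bricks/b1 new-server:/bricks/b1 status
    gluster volume replace-brick myvol old-server:/bricks/b1 new-server:/bricks/b1 commit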
17:53 Pupeno joined #gluster
17:56 masterfix joined #gluster
17:58 masterfix hi, is there a "volume rename" method in gluster 3.5?
18:01 Rapture joined #gluster
18:06 psilvao_ Hi again, is there some option to make it work with IPv4 only?
18:06 psilvao_ i have a big problem in centos7
18:07 psilvao_ it's impossible to make it work as a service
18:07 MacWinner joined #gluster
18:08 psilvao_ because glusterd does a DNS resolve, it doesn't use /etc/hosts
18:08 psilvao_ :(
18:08 psilvao_ reading bugs
18:08 psilvao_ i found this .. https://bugzilla.redhat.com/show_bug.cgi?id=1117886
18:08 glusterbot Bug 1117886: unspecified, unspecified, ---, anders.blomdell, POST , Gluster not resolving hosts with IPv6 only lookups
18:09 psilvao_ that's my problem: common-utils.c:125:gf_resolve_ip6] 0-resolver: getaddrinfo failed (Name or service not known
18:09 psilvao_ why doesn't it use IPv4?
18:09 psilvao_ it only uses IPv6 to resolve
18:40 masterfix left #gluster
18:44 jmarley joined #gluster
18:48 _dist joined #gluster
18:50 Gill joined #gluster
18:56 T0aD joined #gluster
19:02 maveric_amitc_ joined #gluster
19:03 siel joined #gluster
19:07 jackdpeterson joined #gluster
19:08 DV joined #gluster
19:10 sputnik13 joined #gluster
19:15 Ramereth joined #gluster
19:19 sputnik13 joined #gluster
19:19 bennyturns joined #gluster
19:23 ira joined #gluster
19:28 jmarley joined #gluster
19:29 semiosis unsignedmark: yw!
19:44 sputnik13 joined #gluster
19:47 Ramereth joined #gluster
19:53 sputnik13 joined #gluster
19:56 psilvao_ solved
19:56 psilvao_ the problem was a race condition with systemctl start gluster.service
19:56 psilvao_ I found that when I run it from rc.local with a 40 sec sleep, all works OK
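An alternative to the fixed sleep in rc.local is to order the unit after the network is actually online via a systemd drop-in; a sketch assuming the unit is named glusterd.service on CentOS 7 (adjust if the local unit name differs):

    mkdir -p /etc/systemd/system/glusterd.service.d
    cat > /etc/systemd/system/glusterd.service.d/network-online.conf <<'EOF'
    [Unit]
    Wants=network-online.target
    After=network-online.target
    EOF
    systemctl daemon-reload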
20:00 sputnik13 joined #gluster
20:00 hagarth joined #gluster
20:25 lmickh joined #gluster
20:32 n-st joined #gluster
20:33 marcoceppi_ joined #gluster
20:47 _polto_ joined #gluster
20:52 siel joined #gluster
20:58 redbeard joined #gluster
21:01 nage joined #gluster
21:15 siel joined #gluster
21:18 diegows joined #gluster
21:41 Pupeno joined #gluster
22:02 mkzero joined #gluster
22:15 doo joined #gluster
22:19 bene2 joined #gluster
22:19 Pupeno joined #gluster
22:48 squizzi joined #gluster
23:03 gildub joined #gluster
23:04 mikedep333 joined #gluster
23:10 Rapture joined #gluster
23:11 Pupeno_ joined #gluster
23:19 T3 joined #gluster
23:31 javi404 joined #gluster
23:39 sadbox joined #gluster
23:46 javi404 joined #gluster
23:54 plarsen joined #gluster
