IRC log for #gluster, 2016-08-01


All times shown according to UTC.

Time Nick Message
00:35 kukulogy joined #gluster
00:51 shdeng joined #gluster
01:05 julim joined #gluster
01:33 Lee1092 joined #gluster
01:38 aj__ joined #gluster
01:40 harish_ joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:01 wadeholler joined #gluster
02:26 kshlm joined #gluster
02:30 newdave joined #gluster
02:59 kukulogy joined #gluster
03:15 bkunal joined #gluster
03:23 nishanth joined #gluster
03:37 Wizek joined #gluster
03:39 magrawal joined #gluster
03:43 nhayashi joined #gluster
03:44 prasanth joined #gluster
03:59 atinm joined #gluster
04:03 itisravi joined #gluster
04:06 muneerse2 joined #gluster
04:09 Wizek joined #gluster
04:22 aravindavk joined #gluster
04:29 kdhananjay joined #gluster
04:30 RameshN joined #gluster
04:30 poornimag joined #gluster
04:33 rafi joined #gluster
04:35 rwheeler joined #gluster
04:40 raghug joined #gluster
04:50 shubhendu__ joined #gluster
04:51 johnmilton joined #gluster
04:54 ira joined #gluster
04:57 jiffin joined #gluster
05:04 Muthu_ joined #gluster
05:06 nbalacha joined #gluster
05:07 pur joined #gluster
05:07 nehar joined #gluster
05:22 ppai joined #gluster
05:32 rafi1 joined #gluster
05:32 karthik_ joined #gluster
05:34 msvbhat joined #gluster
05:36 aspandey joined #gluster
05:38 rafi joined #gluster
05:39 sanoj joined #gluster
05:39 Apeksha joined #gluster
05:41 itisravi joined #gluster
05:42 Muthu_ joined #gluster
05:44 aj__ joined #gluster
05:49 Manikandan joined #gluster
05:51 skoduri joined #gluster
05:51 ppai joined #gluster
05:52 level7 joined #gluster
05:55 ashiq joined #gluster
05:56 Bhaskarakiran joined #gluster
05:56 atalur joined #gluster
05:58 [diablo] joined #gluster
06:00 satya4ever joined #gluster
06:05 ashiq joined #gluster
06:09 ramky joined #gluster
06:18 Alghost joined #gluster
06:19 karnan joined #gluster
06:22 hackman joined #gluster
06:24 msvbhat joined #gluster
06:25 DV__ joined #gluster
06:28 devyani7_ joined #gluster
06:30 devyani7_ joined #gluster
06:31 jtux joined #gluster
06:32 hchiramm joined #gluster
06:36 Manikandan joined #gluster
06:43 ahino joined #gluster
06:46 spalai joined #gluster
06:47 rafi joined #gluster
06:47 RameshN joined #gluster
06:47 jtux joined #gluster
07:00 anil joined #gluster
07:06 mbukatov joined #gluster
07:13 spalai joined #gluster
07:14 ivan_rossi joined #gluster
07:14 ivan_rossi left #gluster
07:21 kovshenin joined #gluster
07:22 kukulogy_ joined #gluster
07:23 Philambdo joined #gluster
07:37 RameshN joined #gluster
07:39 aj__ joined #gluster
07:39 Siavash joined #gluster
07:45 ashiq joined #gluster
07:45 aspandey_ joined #gluster
07:46 rafi1 joined #gluster
07:47 itisravi_ joined #gluster
07:49 raghug joined #gluster
07:50 Slashman joined #gluster
07:50 level7 joined #gluster
07:51 Alghost joined #gluster
07:52 itisravi_ joined #gluster
07:56 fsimonce joined #gluster
08:01 somlin22 joined #gluster
08:04 ppai joined #gluster
08:30 raghug joined #gluster
08:35 kdhananjay joined #gluster
08:36 atalur joined #gluster
08:38 aspandey joined #gluster
08:39 aspandey joined #gluster
08:43 sanoj joined #gluster
08:53 muneerse joined #gluster
08:56 kotreshhr joined #gluster
08:58 spalai joined #gluster
09:02 jtux joined #gluster
09:14 arcolife joined #gluster
09:15 spalai joined #gluster
09:18 Alghost joined #gluster
09:22 somlin22 Hi All, 2 out of 6 bricks were in a faulty state in geo-replication. I've deleted the geo-replication session and established a new one. The status is Active and the Crawl Status is Changelog Crawl.
09:23 somlin22 But no data has replicated to the slave nodes. Am I missing a step when re-creating a geo-replication session?
09:24 kdhananjay joined #gluster
09:26 somlin22 Do I need to turn off indexing to force a re-sync?
09:27 aj__ joined #gluster
09:31 nishanth joined #gluster
09:38 anil joined #gluster
09:38 Alghost_ joined #gluster
09:40 aspandey joined #gluster
09:40 msvbhat joined #gluster
09:42 atinm joined #gluster
09:45 level7 joined #gluster
09:45 jwd joined #gluster
09:46 nbalacha joined #gluster
09:47 sanoj joined #gluster
09:54 Manikandan joined #gluster
09:55 sandersr joined #gluster
09:57 sanoj joined #gluster
10:05 devyani7_ joined #gluster
10:10 Slydder joined #gluster
10:11 nbalacha joined #gluster
10:17 Manikandan joined #gluster
10:18 jtux joined #gluster
10:24 Slydder I currently have 10 Dell servers (PowerEdge R410, R420 & R515) and a running GlusterFS cluster. I need to get more speed out of the setup (currently 1-Gbit cards) and was thinking of going InfiniBand, but I have been hearing a lot about 100-Gbit cards lately. I need the fastest possible throughput. What would you guys suggest?
10:26 cloph is network really your limiting factor? Sure, that faster network gear offers lower latency, but are your disks bored?
10:27 Slydder I believe so. How would one best tell?
10:28 cloph iostat -mx 5 would be a quick check; look for iowait and IOPS
10:30 Slydder bored to death
10:31 Slydder 95% idle
10:31 Slydder I've been using iotop for a while on the gluster servers and it says about the same thing.
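A minimal sketch of the check cloph suggests, with the columns worth reading (column names vary slightly between sysstat versions, so treat this as a guide):

    # Extended per-device stats in MB/s, refreshed every 5 seconds.
    iostat -mx 5
    # Columns to watch:
    #   %util    - fraction of time the device was busy; near 100% means the
    #              disks themselves are the bottleneck
    #   await    - average time (ms) each I/O request waits
    #   r/s, w/s - read and write IOPS
    # Mostly-idle disks (like the ~95% idle reported here) combined with slow
    # clients point at the network or per-lookup latency, not the storage.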
10:46 raghug joined #gluster
10:47 somlin22 Hi All, 2 out of 6 bricks were in a faulty state in geo-replication. I've deleted the geo-replication session and established a new one. The status is Active and the Crawl Status is Changelog Crawl. No data has replicated to the slave volume. Do I need to turn off indexing to force a re-sync?
10:54 cloph yes, to force a resync after it has already had an initial sync run, you need to clear the indexing flag, and you should also reset the timestamp attribute.
10:54 Manikandan joined #gluster
10:54 cloph you have to delete the geo-replication link, then you can do: gluster volume set <volname> geo-replication.indexing off
10:55 somlin22 cloph: Thanks, is glusterfs indexing resource intensive? Should I schedule this outside working hours?
10:56 cloph https://www.gluster.org/pipermail/gluster-users/2015-June/022164.html
10:56 glusterbot Title: [Gluster-users] Geo-Replication - Changelog socket is not present - Falling back to xsync (at www.gluster.org)
10:57 cloph somlin22: it takes time to create the initial batch of changes to process, but nothing that would bring down a server.
10:57 cloph If you have a huge number of files to process, then make sure you have enough disk space for gluster to put its changelogs
10:57 cloph especially when the geo-replication link is slow and you have lots of changes in your master volume
10:58 cloph (gluster volume geo-replication master slave config workdir /path/with/enough/storage)
10:59 newdave joined #gluster
11:00 somlin22 cloph: Thanks. Can I turn geo-replication.indexing back on straight after turning it off?
11:00 wadeholler joined #gluster
11:00 cloph once you reestablish the link (i.e. volume geo-replication create master slave), it will turn on automatically.
11:01 somlin22 cloph: Many thanks.
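For reference, the resync procedure cloph outlines above, as a hedged sketch with hypothetical names ("mastervol" on the master side, "slavehost::slavevol" on the slave side); exact syntax differs between Gluster releases:

    # Stop and delete the existing geo-replication session.
    gluster volume geo-replication mastervol slavehost::slavevol stop
    gluster volume geo-replication mastervol slavehost::slavevol delete
    # Clear the indexing flag so the next session starts with a fresh crawl.
    gluster volume set mastervol geo-replication.indexing off
    # Re-create and start the session; indexing is switched back on
    # automatically when the link is re-established.
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start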
11:03 muneerse2 joined #gluster
11:15 Wizek joined #gluster
11:17 level7 joined #gluster
11:20 kukulogy joined #gluster
11:21 robb_nl joined #gluster
11:25 atinm joined #gluster
11:38 johnmilton joined #gluster
11:44 wadeholler joined #gluster
11:45 Wizek joined #gluster
11:45 ppai joined #gluster
11:50 atinm joined #gluster
11:53 anil joined #gluster
11:54 somlin22 joined #gluster
11:58 level7_ joined #gluster
12:02 wadeholler joined #gluster
12:03 dnunez joined #gluster
12:03 jiffin1 joined #gluster
12:11 aj__ joined #gluster
12:12 somlin22 joined #gluster
12:18 chirino_m joined #gluster
12:18 skoduri joined #gluster
12:20 kotreshhr left #gluster
12:24 jiffin joined #gluster
12:35 kovshenin joined #gluster
12:36 m0zes joined #gluster
12:40 atinm joined #gluster
12:42 lalatenduM joined #gluster
12:43 jiffin joined #gluster
12:51 plarsen joined #gluster
12:53 dnunez joined #gluster
12:57 karnan joined #gluster
12:58 ppai joined #gluster
13:04 Philambdo joined #gluster
13:07 somlin22 joined #gluster
13:07 wadeholler joined #gluster
13:08 Slydder I currently have 10 Dell servers (PowerEdge R410, R420 & R515) and a running GlusterFS cluster. I need to get more speed out of the setup (currently 1-Gbit cards) and was thinking of going InfiniBand, but I have been hearing a lot about 100-Gbit cards lately. Just to clarify a bit: this is not a drive latency problem (the drives are ca. 95% idle) and I have high lookup latency. I need the fastest possible throughput. What would you guys suggest?
13:15 Apeksha joined #gluster
13:17 theron joined #gluster
13:24 julim joined #gluster
13:24 m0zes joined #gluster
13:28 julim joined #gluster
13:34 skylar joined #gluster
13:43 karnan joined #gluster
13:45 m0zes joined #gluster
13:46 hchiramm joined #gluster
13:56 msvbhat joined #gluster
13:56 ben453 joined #gluster
14:00 skoduri joined #gluster
14:03 arcolife joined #gluster
14:03 aj__ joined #gluster
14:06 davidj joined #gluster
14:07 fyxim joined #gluster
14:10 skoduri joined #gluster
14:13 mpiet_cloud joined #gluster
14:15 scubacuda joined #gluster
14:15 Lee1092 joined #gluster
14:15 RustyB joined #gluster
14:17 Slydder I currently have 10 Dell servers (PowerEdge R410, R420 & R515) and a running GlusterFS cluster. I need to get more speed out of the setup (currently 1-Gbit cards) and was thinking of going InfiniBand, but I have been hearing a lot about 100-Gbit cards lately. Just to clarify a bit: this is not a drive latency problem (the drives are ca. 95% idle) and I have high lookup latency. I need the fastest possible throughput. What would you guys suggest?
14:17 lh joined #gluster
14:18 PotatoGim joined #gluster
14:18 AppStore joined #gluster
14:19 sc0 joined #gluster
14:21 Philambdo1 joined #gluster
14:22 JoeJulian Slydder: Infiniband is going to give the lowest network latency.
14:23 bowhunter joined #gluster
14:26 Slydder JoeJulian: that is what I'm thinking.
14:34 side_control speaking of infiniband
14:35 side_control I've been banging my head against it =/
14:36 side_control all weekend, actually..
14:36 poornimag joined #gluster
14:36 side_control what is the dht-helper?
14:36 side_control can't make sense of these logs
14:36 side_control http://paste.fedoraproject.org/399375/
14:36 glusterbot Title: #399375 Fedora Project Pastebin (at paste.fedoraproject.org)
14:38 hwcomcn joined #gluster
14:39 hwcomcn joined #gluster
14:41 ndevos side_control: hmm, that looks very much like something I've seen before.... if only I could remember where that was :-/
14:42 ndevos and with gmane.org missing in action, I do not know where to search our mailing lists
14:42 side_control ndevos: I'm at my wit's end, sadly... it's running, throughput is great, but it's unstable
14:42 hwcomcn joined #gluster
14:43 ndevos side_control: maybe it was related to mixing different versions, but I really am not sure
14:43 side_control ndevos: no, versions are the same
14:44 side_control I think it might have to do with distributed bricks
14:45 side_control I added another brick to a single-brick volume and had tons of errors
14:52 hwcomcn_ joined #gluster
14:53 ndevos side_control: I'm looking at bugs on http://bugs.cloud.gluster.org/ , but cannot really find anything suitable
14:53 glusterbot Title: Gluster Bugs (at bugs.cloud.gluster.org)
14:53 ndevos side_control: maybe you can file a bug for that?
14:53 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
14:53 side_control ndevos: yea, still need to find a workaround
14:54 cloph http://www.mail-archive.com/gluster-users@gluster.org/
14:54 glusterbot Title: gluster-users (at www.mail-archive.com)
14:55 side_control ndevos: first though, can't get to my git repo for work
14:57 side_control JoeJulian: ping, you do a lot of infiniband stuff?
14:59 side_control cloph: thank you
14:59 ndevos cloph++ thanks!
14:59 glusterbot ndevos: cloph's karma is now 1
14:59 ndevos side_control: http://www.mail-archive.com/gluster-users%40gluster.org/msg25276.html seems to be about your problem :)
14:59 glusterbot Title: Re: [Gluster-users] Fedora upgrade to f24 installed 3.8.0 client and broke mounting (at www.mail-archive.com)
15:01 side_control hah, GMU, I went there
15:02 johnmilton hello, I have a problem with self-healing operations... I'm getting a message that some of my bricks are down, but they show as online
15:02 cloph (FYI: mail-archive.com supports vim-style keyboard navigation, n & p for next/previous, for example)
15:02 wushudoin joined #gluster
15:03 side_control johnmilton: how are you checking? gluster volume status, and you see the PID for each server that contains a brick?
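A sketch of what that check looks like for a hypothetical volume "gv0" (output approximated; the exact columns differ between releases):

    gluster volume status gv0
    # Status of volume: gv0
    # Gluster process                      TCP Port  RDMA Port  Online  Pid
    # Brick server1:/bricks/b1             49152     0          Y       1234
    # Brick server2:/bricks/b1             49152     0          N       N/A
    # "Online N" (or a missing Pid) means the brick process is down even though
    # the node itself answers; cross-check with the heal view:
    gluster volume heal gv0 info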
15:03 wushudoin joined #gluster
15:06 side_control ndevos: tbh, it certainly could be one of the issues, but I'm running into a lot of other problems as well; I need a break
15:09 aravindavk joined #gluster
15:11 shyam joined #gluster
15:12 level7 joined #gluster
15:13 arcolife joined #gluster
15:15 Hamburglr joined #gluster
15:19 shaunm joined #gluster
15:24 jiffin joined #gluster
15:34 bkolden joined #gluster
15:39 somlin22 joined #gluster
15:50 shyam joined #gluster
16:00 jiffin joined #gluster
16:08 nbalacha joined #gluster
16:08 jiffin joined #gluster
16:10 theron joined #gluster
16:19 hackman joined #gluster
16:23 theron joined #gluster
16:24 shyam joined #gluster
16:30 level7_ joined #gluster
16:30 somlin22 joined #gluster
16:30 theron joined #gluster
16:36 julim joined #gluster
16:46 cloph a question on removing bricks / reducing the replica count: since you have to specify force, and it shows a warning that the operation might cause data loss, under which circumstances would it actually cause data loss? I assume it will make sure any pending files are replicated and then shut down the brick.
16:47 post-factum cloph: in replica "force" shouldn't lead to data loss
16:47 cloph thx for confirming
16:48 post-factum cloph: at least, that has never happened to me, and any other behavior should be considered a bug
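For illustration, a hedged sketch of the operation under discussion, assuming a replica-3 volume "gv0" being reduced to replica 2 (hypothetical names; checking for pending heals first is cheap insurance, since "shouldn't lose data" is not a guarantee):

    # Confirm nothing is still waiting to be healed onto the surviving bricks.
    gluster volume heal gv0 info
    # Drop one replica; removing a brick that lowers the replica count
    # requires force, which is what triggers the data-loss warning.
    gluster volume remove-brick gv0 replica 2 server3:/bricks/b1 force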
16:50 bluenemo joined #gluster
16:52 hagarth joined #gluster
16:53 rafi joined #gluster
16:54 rafi joined #gluster
17:03 johnmilton side_control: yes, sorry
17:07 bowhunter joined #gluster
17:22 bwerthmann joined #gluster
17:37 ahino joined #gluster
17:47 RameshN joined #gluster
17:47 shyam joined #gluster
17:55 gpeterso joined #gluster
17:59 hagarth1 joined #gluster
18:06 bkolden joined #gluster
18:08 ahino joined #gluster
18:10 julim joined #gluster
18:13 RameshN_ joined #gluster
18:21 bowhunter joined #gluster
18:26 side_control any ideas? trying to mount glusterfs -o rdma on a gluster server
18:26 side_control https://paste.fedoraproject.org/399457/
18:26 glusterbot Title: #399457 Fedora Project Pastebin (at paste.fedoraproject.org)
18:26 side_control seems to work fine as long as it's not a server that contains bricks
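For context, the mount being attempted is roughly the following (hypothetical server and volume names; the transport option is what selects RDMA, and the volume must have been created with an rdma or tcp,rdma transport):

    # Mount a gluster volume over the RDMA transport.
    mount -t glusterfs -o transport=rdma server1:/gv0 /mnt/gluster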
18:35 theron joined #gluster
18:36 theron joined #gluster
18:36 side_control then there is this
18:36 side_control https://paste.fedoraproject.org/399464/
18:36 glusterbot Title: #399464 Fedora Project Pastebin (at paste.fedoraproject.org)
18:36 side_control which stalled and hiccupped
18:37 ahino joined #gluster
18:40 B21956 joined #gluster
18:41 shyam joined #gluster
18:44 jbrooks joined #gluster
18:48 hagarth joined #gluster
18:58 johnmilton joined #gluster
18:58 somlin22 joined #gluster
19:16 johnmilton joined #gluster
19:17 theron joined #gluster
19:19 ahino joined #gluster
19:21 jockek joined #gluster
19:40 devyani7 joined #gluster
19:41 Manikandan joined #gluster
19:41 hagarth joined #gluster
19:44 guhcampos joined #gluster
20:03 level7 joined #gluster
20:08 somlin22 joined #gluster
20:13 msvbhat joined #gluster
20:27 theron joined #gluster
20:30 aj__ joined #gluster
20:36 bwerthmann joined #gluster
20:37 bowhunter joined #gluster
20:44 andresmoya Hi, I have a brick in a replicated volume whose filesystem has become corrupt. The OS and Gluster install are in good shape; only the brick mount points are affected. I want to wipe out the filesystem, re-create it, and then have the data sync back from one of the good nodes. Are there instructions for this anywhere?
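The question went unanswered in-channel; one commonly documented approach is sketched below with hypothetical names (a sketch only, so verify against the replace-brick documentation for your Gluster version before running it):

    # After wiping and re-creating the filesystem on the bad brick, point the
    # volume at a fresh brick path so gluster re-initializes its metadata:
    gluster volume replace-brick gv0 server2:/bricks/b1 server2:/bricks/b1-new commit force
    # Then pull the data back from the healthy replica:
    gluster volume heal gv0 full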
21:20 suliba joined #gluster
21:24 andresmoya anyone there?
21:28 andresmoya left #gluster
21:56 bwerthma1n joined #gluster
22:12 sadbox joined #gluster
22:13 theron joined #gluster
22:46 level7_ joined #gluster
22:54 kukulogy joined #gluster
23:27 d0nn1e joined #gluster
