
IRC log for #gluster, 2015-07-24


All times shown according to UTC.

Time Nick Message
00:07 cleong joined #gluster
00:13 lpabon joined #gluster
00:14 jdossey joined #gluster
00:27 fyxim joined #gluster
00:39 plarsen joined #gluster
01:16 jmarley joined #gluster
01:22 samsaffron___ joined #gluster
01:24 spcmastertim joined #gluster
01:24 tquinn joined #gluster
01:29 Lee1092 joined #gluster
01:48 autoditac joined #gluster
01:52 rafi joined #gluster
01:56 calisto joined #gluster
02:00 billputer joined #gluster
02:04 harish joined #gluster
02:08 sankarshan joined #gluster
02:31 ira joined #gluster
02:35 calisto joined #gluster
03:01 hagarth joined #gluster
03:12 TheSeven joined #gluster
03:14 wushudoin| joined #gluster
03:19 wushudoin| joined #gluster
03:31 jobewan joined #gluster
03:36 kdhananjay joined #gluster
03:36 nangthang joined #gluster
03:36 schandra joined #gluster
03:36 lpabon joined #gluster
03:41 atinm joined #gluster
03:47 aravindavk joined #gluster
03:53 sripathi joined #gluster
04:01 itisravi joined #gluster
04:04 kanagaraj joined #gluster
04:04 shubhendu joined #gluster
04:05 kotreshhr joined #gluster
04:11 ppai joined #gluster
04:13 calisto joined #gluster
04:26 LebedevRI joined #gluster
04:27 RameshN joined #gluster
04:44 nbalacha joined #gluster
04:47 vmallika joined #gluster
04:54 ndarshan joined #gluster
04:57 jiffin joined #gluster
05:04 sahina joined #gluster
05:06 pppp joined #gluster
05:13 gem joined #gluster
05:14 sakshi joined #gluster
05:14 DV joined #gluster
05:18 Manikandan joined #gluster
05:20 hgowtham joined #gluster
05:20 vimal joined #gluster
05:20 ashiq joined #gluster
05:22 anil joined #gluster
05:32 deepakcs joined #gluster
05:33 arcolife joined #gluster
05:36 aravindavk joined #gluster
05:37 JoeJulian @paste
05:37 glusterbot JoeJulian: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
05:39 JoeJulian hagarth: Are you around? I'm trying to figure out what's wrong with this build. http://termbin.com/mays
05:41 hagarth JoeJulian: is this with gcc5 on a new distro?
05:42 sakshi joined #gluster
05:43 JoeJulian yep
05:44 hgowtham joined #gluster
05:44 hagarth JoeJulian: would need to change inline to "static inline"  or similar in src
05:44 hagarth JoeJulian: https://www.gluster.org/pipermail/gluster-devel/2015-June/045942.html .. a recent thread on gluster-devel
05:45 JoeJulian cool.
05:46 maveric_amitc_ joined #gluster
05:48 jiffin JoeJulian: a more detailed explanation can be seen in this thread too: https://www.gluster.org/pipermail/gluster-devel/2015-July/046222.html
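A minimal sketch of the workaround under discussion, assuming the tree's Makefiles honor a CFLAGS override: GCC 5 defaults to C99/C11 inline semantics, under which a plain "inline" definition no longer emits an out-of-line symbol, which is why these older GlusterFS sources stopped building; forcing the old gnu89 semantics (or changing the functions to "static inline" in the source, as hagarth suggests) sidesteps that.

    # Hypothetical workaround: rebuild an older GlusterFS tree with
    # pre-GCC-5 (gnu89) inline semantics instead of patching the source.
    ./configure
    make CFLAGS="-O2 -g -std=gnu89"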
05:56 jdossey joined #gluster
05:56 SOLDIERz joined #gluster
05:56 autoditac joined #gluster
06:00 frostyfrog joined #gluster
06:00 frostyfrog joined #gluster
06:03 atalur joined #gluster
06:04 raghu joined #gluster
06:05 overclk joined #gluster
06:05 kdhananjay joined #gluster
06:16 jtux joined #gluster
06:20 PatNarciso joined #gluster
06:21 mbukatov joined #gluster
06:21 spalai joined #gluster
06:21 spalai left #gluster
06:23 jwd joined #gluster
06:24 hgowtham joined #gluster
06:25 Saravana_ joined #gluster
06:25 vimal joined #gluster
06:35 nishanth joined #gluster
06:45 rjoseph joined #gluster
06:45 ppai joined #gluster
06:51 spalai joined #gluster
06:51 spalai left #gluster
07:01 skoduri joined #gluster
07:05 vincent_vdk joined #gluster
07:08 vincent_vdk joined #gluster
07:10 vincent_vdk joined #gluster
07:10 pcaruana joined #gluster
07:11 vincent_vdk joined #gluster
07:11 dusmant joined #gluster
07:11 ppai joined #gluster
07:12 vincent_vdk joined #gluster
07:14 JoeJulian hagarth, jiffin: Thanks. Got it working with: find -type f -name '*.c' -exec sed -i -e '/static/!s/^inline/extern inline/' {} \;
07:16 JoeJulian Probably overkill, but I only need it working long enough to sync the replicas, after that I'll upgrade to 3.7.
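For reference, a sketch of what that one-liner rewrites, shown on a hypothetical source line (only lines that start with "inline" and do not also contain "static" are touched):

    # hypothetical line in some .c file before the sed:
    #   inline int example_fn (int x) { return x + 1; }
    # and after it:
    #   extern inline int example_fn (int x) { return x + 1; }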
07:20 [Enrico] joined #gluster
07:21 Manikandan joined #gluster
07:23 TrincaTwik joined #gluster
07:23 rafi joined #gluster
07:33 vincent_vdk joined #gluster
07:35 vincent_vdk joined #gluster
07:35 anrao joined #gluster
07:38 aravindavk joined #gluster
07:45 Manikandan joined #gluster
07:48 vincent_vdk joined #gluster
07:58 sripathi1 joined #gluster
08:19 uebera|| joined #gluster
08:22 smohan joined #gluster
08:26 nsoffer joined #gluster
08:40 dusmant joined #gluster
08:40 glusterbot News from newglusterbugs: [Bug 1022759] subvols-per-directory floods client logs with "disk layout missing" messages <https://bugzilla.redhat.com/show_bug.cgi?id=1022759>
08:40 rjoseph joined #gluster
08:50 sakshi joined #gluster
08:52 vmallika joined #gluster
08:56 Debloper joined #gluster
08:56 jwd joined #gluster
08:59 jwaibel joined #gluster
08:59 sakshi joined #gluster
09:01 jcastill1 joined #gluster
09:06 jcastillo joined #gluster
09:06 jmarley joined #gluster
09:08 rjoseph joined #gluster
09:10 Philambdo joined #gluster
09:16 yazhini joined #gluster
09:23 elico joined #gluster
09:23 ramky joined #gluster
09:26 s19n joined #gluster
09:27 s19n Hi all. I have three Gluster servers with 4x1GB ethernet interfaces each, slaves of a bond0 in balance-alb mode
09:28 dusmant joined #gluster
09:29 s19n I have never seen more than 50-60 MB/s used on any server, even while performing self-heal
09:30 s19n what could I check to verify there are no bottlenecks?
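A few first checks, sketched here assuming a Linux balance-alb bond named bond0 and the sysstat tools installed:

    # confirm the bond mode and that all four slaves are up
    cat /proc/net/bonding/bond0
    # per-NIC throughput: is traffic spread across the slaves or pinned to one?
    sar -n DEV 1
    # rule out the bricks' disks as the bottleneck while network use is low
    iostat -x 1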
09:32 ndarshan joined #gluster
09:35 nishanth joined #gluster
09:38 sahina joined #gluster
09:43 pjschmitt joined #gluster
09:45 ctria joined #gluster
09:45 Philambdo joined #gluster
09:48 Philambdo joined #gluster
09:49 jdossey joined #gluster
09:54 jmarley joined #gluster
09:55 jdossey joined #gluster
09:56 sahina joined #gluster
10:00 cleong joined #gluster
10:11 rjoseph joined #gluster
10:14 Slashman joined #gluster
10:17 karnan joined #gluster
10:41 glusterbot News from newglusterbugs: [Bug 1246432] ./tests/basic/volume-snapshot.t  spurious fail causing glusterd crash. <https://bugzilla.redhat.com/show_bug.cgi?id=1246432>
10:43 arcolife joined #gluster
10:45 ndevos s19n: I'm not completely sure how bonding works, but I thought you need multiple tcp connections to really get any benefit from it
10:46 ndevos s19n: that would mean you could see a performance increase when you have multiple mountpoints and use them simultaneously
10:46 deniszh joined #gluster
10:47 ndevos a single self-heal would use one tcp connection to each brick, and that can probably not be load-balanced through the bond
10:48 s19n ndevos, I knew that, and in fact I have four bricks per server
10:50 ndevos s19n: ah, good, and you have a multi-threaded/process workload too? otherwise a single thread/process would use only one brick at a time
10:50 s19n what I am seeing now is less than a single Gbit channel anyway (50 MB/s should be somewhere near 400 Mbit/s?)
10:51 ndevos is that only with self-heal, or also with normal workloads?
10:51 ndevos self-heal has a throttling mechanism, I think
10:52 s19n also with normal workloads; I'd have expected to see more network traffic during a self-heal, but instead it's about the same as usual
10:52 s19n throttling, ah, interesting
10:53 s19n is that anything I could look at via glusterfs options?
10:53 s19n (I am currently going from replica 2 to replica 3)
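For context, raising the replica count is normally done with add-brick; a sketch with hypothetical names, shown for a volume with a single replica pair (with several distribute subvolumes, list one new brick per pair in the same command):

    # add a third copy to a replica-2 volume
    gluster volume add-brick myvol replica 3 server3:/export/brick1
    # the new brick is then filled by self-heal; watch progress with
    gluster volume heal myvol info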
10:58 s19n gah, seen the following in one of the clients:
10:58 s19n E [dht-helper.c:1428:dht_unlock_inodelk] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.4.7/xlator/cluster/replicate.so(afr_rename_unwind+0x11a) [0x7f3a7cbc2aaa] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.4.7/xlator/cluster/distribute.so(dht_rename_dir_cbk+0xb8) [0x7f3a7c97e8c8] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.4.7/xlator/cluster/distribute.so(dht_rename_unlock+0x70) [0x7f3a7c97e720]))) 1-volname-dht: invalid argument: lk_array
10:58 glusterbot s19n: ('s karma is now -95
10:58 glusterbot s19n: ('s karma is now -96
10:58 glusterbot s19n: ('s karma is now -97
10:59 s19n not related to what we were discussing, but I'd like to ask for a comment on it as well...
11:01 ndevos maybe cluster.self-heal-window-size? but itisravi or one of the other devs working on replication would know more about it
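A sketch of how those self-heal tunables can be inspected and adjusted (volume name hypothetical; option names as they existed in the 3.x releases):

    # list tunables and their defaults
    gluster volume set help | grep -i self-heal
    # blocks per file healed at a time
    gluster volume set myvol cluster.self-heal-window-size 16
    # files healed in the background concurrently
    gluster volume set myvol cluster.background-self-heal-count 8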
11:03 ajames-41678 joined #gluster
11:03 anrao joined #gluster
11:04 ndevos s19n: some of the people on gluster-users@gluster.org use bonding, you could write to the list, explain your setup and ask for ideas
11:04 itisravi s19n: what kind of files are there on the volume? normal files or vm images?
11:06 jcastill1 joined #gluster
11:07 TrincaTwik joined #gluster
11:18 ira_ joined #gluster
11:22 jcastillo joined #gluster
11:26 gem_ joined #gluster
11:27 jcastill1 joined #gluster
11:28 ndarshan joined #gluster
11:37 jcastillo joined #gluster
11:44 s19n normal files, from a few KB to several GBs
11:44 s19n they are generally written only once
11:47 vincent_vdk joined #gluster
11:54 akay1 does anyone know if the trashcan bug will be fixed in 3.7.3? https://bugzilla.redhat.com/show_bug.cgi?id=1237375
11:54 glusterbot Bug 1237375: medium, urgent, ---, achiraya, ASSIGNED , Trashcan broken on Distribute-Replicate volume
11:55 TrincaTwik joined #gluster
11:57 frankS2 joined #gluster
12:08 Pintomatic joined #gluster
12:13 rjoseph joined #gluster
12:14 jtux joined #gluster
12:19 hagarth joined #gluster
12:21 unclemarc joined #gluster
12:23 lezo joined #gluster
12:24 jrm16020 joined #gluster
12:25 ekuric joined #gluster
12:27 tquinn joined #gluster
12:32 rjoseph joined #gluster
12:41 glusterbot News from newglusterbugs: [Bug 1246481] rpc: fix binding brick issue while bind-insecure is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1246481>
12:55 firemanxbr joined #gluster
13:05 anrao joined #gluster
13:06 lyang0 joined #gluster
13:12 mpietersen joined #gluster
13:13 B21956 joined #gluster
13:14 gem_ joined #gluster
13:17 julim joined #gluster
13:21 anrao joined #gluster
13:25 georgeh-LT2 joined #gluster
13:27 jmarley joined #gluster
13:29 overclk joined #gluster
13:30 squizzi joined #gluster
13:30 calisto joined #gluster
13:32 theusualsuspect joined #gluster
13:37 theusualsuspect left #gluster
13:39 ctria Hello... Is there any good article describing when it is better to use native client and when it is better to use NFS export? I recall there was one that I can't find at the moment (at least in RH knowledgebase)
13:41 tquinn joined #gluster
13:47 plarsen joined #gluster
13:51 jdossey joined #gluster
13:52 rafi joined #gluster
14:01 rafi joined #gluster
14:03 msvbhat ctria: I believe that depends on your workload/usecase
14:04 ctria msvbhat, yes it does. That's why I'm asking which workloads are better for one and which for the other
14:04 msvbhat ctria: AFAIK if you have a lot of small files you can use an NFS mount. ndevos, please correct me if I'm wrong
14:04 ctria i.e. I remember exactly that (that NFS is better for small files)
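For reference, the two mount flavours being compared, with hypothetical server and volume names (the built-in gluster NFS server of that era speaks NFSv3 only):

    # native/FUSE client: talks to every brick directly and fails over by itself
    mount -t glusterfs server1:/myvol /mnt/gluster
    # NFS client: goes through the gluster NFS server on one node; kernel NFS
    # client caching often helps small-file and metadata-heavy workloads
    mount -t nfs -o vers=3 server1:/myvol /mnt/gluster-nfs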
14:05 msvbhat I don't see bennyturns online. He might have something more to say
14:06 theron joined #gluster
14:09 bennyturns joined #gluster
14:11 RedW joined #gluster
14:13 shyam joined #gluster
14:13 cyberswat joined #gluster
14:23 leucos joined #gluster
14:24 shubhendu joined #gluster
14:26 rafi joined #gluster
14:39 unclemarc joined #gluster
14:41 dusmant joined #gluster
14:41 rafi joined #gluster
14:42 glusterbot News from newglusterbugs: [Bug 1246543] Issue with restarting geo-replication crashing data replication <https://bugzilla.redhat.com/show_bug.cgi?id=1246543>
14:42 glusterbot News from newglusterbugs: [Bug 1246544] Issue with restarting geo-replication crashing data replication <https://bugzilla.redhat.com/show_bug.cgi?id=1246544>
14:44 _Bryan_ joined #gluster
14:57 rafi joined #gluster
15:00 harish joined #gluster
15:01 elico joined #gluster
15:08 spalai joined #gluster
15:08 spalai left #gluster
15:12 ajames-41678 joined #gluster
15:16 glusterbot News from resolvedglusterbugs: [Bug 1246543] Issue with restarting geo-replication crashing data replication <https://bugzilla.redhat.com/show_bug.cgi?id=1246543>
15:26 sahina joined #gluster
15:27 jiffin joined #gluster
15:33 spcmastertim joined #gluster
15:36 anil joined #gluster
15:38 cholcombe joined #gluster
15:40 calisto joined #gluster
15:47 skoduri joined #gluster
16:01 harish joined #gluster
16:01 kovshenin joined #gluster
16:01 theron_ joined #gluster
16:03 nsoffer joined #gluster
16:10 TrincaTwik joined #gluster
16:18 theron joined #gluster
16:19 cyberswa_ joined #gluster
16:25 cyberswat joined #gluster
16:26 calavera joined #gluster
16:26 theron_ joined #gluster
16:27 togdon joined #gluster
16:28 cyberswat joined #gluster
16:35 cyberswa_ joined #gluster
16:41 cyberswat joined #gluster
16:48 haomaiwa_ joined #gluster
16:52 cyberswa_ joined #gluster
16:53 theron joined #gluster
17:02 _maserati joined #gluster
17:12 cyberswat joined #gluster
17:14 pppp joined #gluster
17:15 calavera_ joined #gluster
17:18 matclayton joined #gluster
17:18 firemanxbr joined #gluster
17:18 matclayton Are there any good tutorials on how to recover from a dead node?
17:21 calavera joined #gluster
17:25 calavera_ joined #gluster
17:28 calavera joined #gluster
17:29 Rapture joined #gluster
17:32 calavera joined #gluster
17:39 TrincaTwik joined #gluster
17:41 bennyturns joined #gluster
17:42 _maserati mator, "gluster replace-brick server2:brick2 server5:brickn commit force"  Obviously lookup the documentation on "replace-brick" first
17:42 _maserati sorry not mator
17:43 _maserati matclayton, "gluster replace-brick server2:brick2 server5:brickn commit force"  Obviously lookup the documentation on "replace-brick" first
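For reference, the fuller documented syntax, with hypothetical volume, host, and brick names:

    # swap the dead node's brick for one on a new node, then let
    # self-heal repopulate it from the surviving replica
    gluster volume replace-brick myvol deadhost:/export/brick1 newhost:/export/brick1 commit force
    gluster volume heal myvol full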
17:43 calavera joined #gluster
17:47 calavera joined #gluster
17:52 jiffin1 joined #gluster
17:54 JoeJulian That also depends on what kind of node died. If it's a client node, that's not a problem. If it's a printer node, even less.
18:09 jdossey joined #gluster
18:13 SOLDIERz joined #gluster
18:18 anrao joined #gluster
18:37 _maserati <3 you all, have a good weekend
18:41 calavera joined #gluster
18:43 cyberswat joined #gluster
18:51 shyam joined #gluster
18:52 cyberswat joined #gluster
18:54 jdossey joined #gluster
18:59 maveric_amitc_ joined #gluster
19:06 rotbeard joined #gluster
19:46 victori joined #gluster
19:54 jonb joined #gluster
19:55 jonb joined #gluster
19:57 pppp joined #gluster
20:01 calisto joined #gluster
20:01 jonb Hello, I have a replicated volume that has one of the nodes down and bringing it back up is causing severe performance problems. Can I sync files from the online brick to the offline brick via rsync to lessen the load on the self-heal when I bring the other node back online?
20:02 SOLDIERz joined #gluster
20:03 togdon joined #gluster
20:06 calavera joined #gluster
20:08 deniszh joined #gluster
20:09 maveric_amitc_ joined #gluster
20:09 jcastill1 joined #gluster
20:14 jcastillo joined #gluster
20:19 PatNarciso so, why would gluster delete files?  today I've witnessed 3 files that existed yesterday get unlinked.  brick log shows I(nfo)s for posix.c open-fd-key-status: 0, linkto_xattr status: 0
20:29 cyberswat joined #gluster
20:29 PatNarciso noteworthy, but I doubt it would cause an issue... an xfs defrag was in progress at the time.
20:31 JoeJulian If a file disappears from a brick, it will not be deleted by gluster. There are three ways for a file to be deleted. One is through a client. Another is to delete the file and the gfid hardlink from both replicas. The last is to delete a file from the volume while one brick is offline, bring the last brick down, bring the other brick up, delete the same file, re-add the file, bring the first brick back up (I think).
20:32 JoeJulian Generally speaking, though, the only way is for the file to be deleted via a client.
20:32 jdossey joined #gluster
20:38 deniszh1 joined #gluster
20:49 PatNarciso yah, but client unlinks don't generate this info message in the brick log.
20:50 PatNarciso this occurred on a single brick volume.  no split-brain arguments.
20:57 JoeJulian Well that message is in the posix unlink function call in the client translator. "open-fd-key-status: 0" means the file was not open when the attempt was made to delete the file.
20:58 JoeJulian linkto_xattr status: 0 means that the file, a dht link file, was not busy and was able to be deleted.
20:59 JoeJulian A dht linkfile is a 0 length file that's mode 0x1000 and has the extended attribute glusterfs.dht.linkto set, pointing to the brick where the file actually resides.
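A sketch of how such a linkfile can be inspected on a brick (path hypothetical; the mode is octal 1000, i.e. only the sticky bit set, so it shows up as ---------T):

    # zero-length file with only the sticky bit set
    stat -c '%A %s %n' /export/brick1/path/to/file
    # trusted.* xattrs need root; trusted.glusterfs.dht.linkto names the
    # subvolume that actually holds the data
    getfattr -d -m . -e hex /export/brick1/path/to/file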
21:01 JoeJulian So... since you have a single brick volume, how could you have dht link files?
21:01 JoeJulian Was this previously a multi-brick volume?
21:02 JoeJulian Were the files (and their xattrs) previously part of a  multi-brick volume?
21:02 PatNarciso more like the files (and their xattrs) were previously part of a multi-brick volume.
21:02 JoeJulian There's why. Since the dht-linkto is no longer valid, it was removed. The *actual* file must have been on the *other* brick.
21:03 PatNarciso ho-le-fuck
21:04 PatNarciso trusted.gfid and trusted.glusterfs.volume-id values were cleared... is there another value I can seek to identify these dht link files?
21:05 * PatNarciso performs gluster v stop... I got a feeling the weekend is just getting started.
21:10 calavera joined #gluster
21:11 JoeJulian find -mode 1000
21:12 JoeJulian Or even better
21:12 JoeJulian find -mode 1000 -size 0
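Spelled out for GNU find, which takes -perm rather than -mode (brick path hypothetical):

    # zero-length files whose mode is exactly 1000 (sticky bit only)
    # are candidate stale dht linkfiles
    find /export/brick1 -type f -perm 1000 -size 0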
21:13 georgeh-LT2 joined #gluster
21:16 PatNarciso k... only 1037 files...
21:18 PatNarciso and most are ._ files or premier preview files...  ok, not too too bad.
21:47 calavera_ joined #gluster
21:50 calavera joined #gluster
21:57 jdossey joined #gluster
22:16 calavera_ joined #gluster
22:17 calavera joined #gluster
22:33 victori joined #gluster
22:40 victori joined #gluster
22:44 cleong joined #gluster
22:48 cholcombe joined #gluster
23:01 theron_ joined #gluster
23:01 calavera joined #gluster
23:03 victori joined #gluster
23:12 dgandhi joined #gluster
23:12 cleong joined #gluster
23:12 calavera joined #gluster
23:14 victori joined #gluster
23:17 badone joined #gluster
23:20 cleong joined #gluster
23:23 badone_ joined #gluster
23:27 victori joined #gluster
23:35 badone__ joined #gluster
23:36 victori joined #gluster
23:47 badone joined #gluster
23:51 ninkotech__ joined #gluster
23:51 Pintomatic_ joined #gluster
