
IRC log for #gluster, 2016-07-22


All times shown according to UTC.

Time Nick Message
00:25 necrogami joined #gluster
00:55 ira joined #gluster
00:56 ahino joined #gluster
01:14 farhoriz_ joined #gluster
01:19 hagarth joined #gluster
01:19 shdeng joined #gluster
01:32 farhorizon joined #gluster
01:38 derjohn_mobi joined #gluster
01:41 Lee1092 joined #gluster
01:51 kramdoss_ joined #gluster
02:02 B21956 joined #gluster
02:11 shdeng joined #gluster
02:27 shdeng joined #gluster
02:33 poornimag joined #gluster
02:38 RameshN joined #gluster
03:03 magrawal joined #gluster
03:11 RameshN joined #gluster
03:13 rideh joined #gluster
03:44 kukulogy joined #gluster
03:45 RameshN joined #gluster
03:49 nbalacha joined #gluster
03:53 atinm joined #gluster
04:04 kukulogy question: I'm using a striped-replicate volume. I tried to fill the mount, then checked the volume status: http://dpaste.com/20T6YYV I wonder why gluster04 has low disk space compared to the other servers
04:04 glusterbot Title: dpaste: 20T6YYV (at dpaste.com)
04:06 atinm joined #gluster
04:22 raghug joined #gluster
04:25 ramky joined #gluster
04:36 shubhendu joined #gluster
04:36 jiffin joined #gluster
04:39 sanoj joined #gluster
04:44 knightsamar joined #gluster
04:44 nehar joined #gluster
04:49 poornimag joined #gluster
04:58 kotreshhr joined #gluster
04:58 gem joined #gluster
05:01 ppai joined #gluster
05:10 ndarshan joined #gluster
05:16 RameshN joined #gluster
05:22 Bhaskarakiran joined #gluster
05:24 sakshi joined #gluster
05:34 F2Knight_ joined #gluster
05:37 satya4ever joined #gluster
05:40 hchiramm joined #gluster
05:41 aspandey joined #gluster
05:42 karthik_ joined #gluster
05:46 prasanth joined #gluster
05:47 nishanth joined #gluster
05:48 kovshenin joined #gluster
05:52 devyani7_ joined #gluster
05:54 prasanth joined #gluster
05:56 nishanth joined #gluster
06:02 hgowtham joined #gluster
06:02 kukulogy How can I rebalance striped replicate?
06:03 ppai joined #gluster
06:06 MikeLupe joined #gluster
06:08 jiffin kukulogy: striped volumes are not supported any more, you can try out sharded volume
06:09 [diablo] joined #gluster
06:10 cliluw joined #gluster
06:12 kukulogy thanks jiffin. I have my eye on striped replicate atm. Do you have an idea why my brick gluster04 has low disk space compared to my other bricks? http://dpaste.com/20T6YYV
06:12 glusterbot Title: dpaste: 20T6YYV (at dpaste.com)
06:13 jiffin kukulogy: Nope
06:13 kukulogy from what I understand stripe replicate will balance the files throughout the servers.
06:14 kukulogy jiffin: I can't see sharded volume in the docs
06:15 R0ok_ joined #gluster
06:15 nehar joined #gluster
06:16 msvbhat joined #gluster
06:17 jiffin kukulogy: http://blog.gluster.org/2015/12/introducing-shard-translator/
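For readers following along: the blog post jiffin links introduces the shard translator as the replacement for stripe. As a rough sketch of what switching to a sharded replicated volume looks like — the volume name, hostnames, and brick paths below are made up for illustration, and this assumes a gluster release with shard support (>= 3.7):

```shell
# Hedged sketch, not from the log: create a 2x2 distributed-replicate
# volume and enable sharding on it. Names/paths are placeholders.
gluster volume create shardvol replica 2 \
    gluster01:/bricks/b1 gluster02:/bricks/b1 \
    gluster03:/bricks/b2 gluster04:/bricks/b2

# With sharding on, large files are stored as fixed-size chunks that
# DHT can distribute across bricks (the role stripe used to play).
gluster volume set shardvol features.shard on
gluster volume set shardvol features.shard-block-size 64MB

gluster volume start shardvol
```

Unlike stripe, shards are ordinary files under a hidden `.shard` directory, so rebalance and self-heal work on them like any other file.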
06:23 mhulsman joined #gluster
06:26 anil_ joined #gluster
06:29 karnan joined #gluster
06:30 prasanth joined #gluster
06:31 rastar joined #gluster
06:32 devyani7 joined #gluster
06:33 kdhananjay joined #gluster
06:36 rafi joined #gluster
06:37 rastar joined #gluster
06:38 kukulogy jiffin: thank you. Btw, what volume are you using?
06:38 ashiq joined #gluster
06:40 jiffin kukulogy: I don't use any specific volumes, I am just a developer in gluster
06:43 post-factum jiffin: real gluster developers do not use gluster ;)
06:44 jiffin post-factum: :D
06:49 Saravanakmr joined #gluster
06:52 Peppard joined #gluster
06:53 mhulsman1 joined #gluster
07:02 mhulsman joined #gluster
07:08 kdhananjay1 joined #gluster
07:13 raghug joined #gluster
07:15 prasanth joined #gluster
07:18 fsimonce joined #gluster
07:19 kdhananjay joined #gluster
07:21 fcoelho joined #gluster
07:23 mhulsman1 joined #gluster
07:28 pur joined #gluster
07:33 harish_ joined #gluster
07:38 [Enrico] joined #gluster
07:40 om joined #gluster
07:44 hackman joined #gluster
07:50 mhulsman joined #gluster
08:04 ivan_rossi joined #gluster
08:04 ivan_rossi left #gluster
08:05 hybrid512 joined #gluster
08:07 Philambdo joined #gluster
08:07 ws2k3 joined #gluster
08:09 natarej__ joined #gluster
08:10 somlin22 joined #gluster
08:11 glusterbot` joined #gluster
08:11 yosafbridge joined #gluster
08:12 bio__ joined #gluster
08:12 kenansulayman joined #gluster
08:12 tru_tru joined #gluster
08:12 zerick_ joined #gluster
08:13 fcoelho joined #gluster
08:13 aspandey joined #gluster
08:13 rastar joined #gluster
08:13 abyss^ joined #gluster
08:14 sysanthrope joined #gluster
08:14 squeakyneb joined #gluster
08:14 rastar joined #gluster
08:14 DJClean joined #gluster
08:15 rastar joined #gluster
08:16 mhulsman joined #gluster
08:20 Slashman joined #gluster
08:23 armyriad joined #gluster
08:26 hagarth joined #gluster
08:29 raghug joined #gluster
08:31 robb_nl joined #gluster
08:32 armyriad joined #gluster
08:32 mhulsman joined #gluster
08:33 aravindavk joined #gluster
08:34 poornimag joined #gluster
08:37 purpleidea joined #gluster
08:37 purpleidea joined #gluster
08:38 derjohn_mobi joined #gluster
08:39 Philambdo joined #gluster
08:41 somlin22 joined #gluster
08:42 RameshN joined #gluster
08:43 [Enrico] joined #gluster
08:46 Philambdo joined #gluster
08:47 armyriad joined #gluster
08:49 md2k joined #gluster
08:55 somlin22 joined #gluster
09:01 arcolife joined #gluster
09:01 Philambdo joined #gluster
09:01 armyriad joined #gluster
09:05 Seth_Karlo joined #gluster
09:12 armyriad joined #gluster
09:17 shdeng joined #gluster
09:21 armyriad joined #gluster
09:26 Wizek joined #gluster
09:29 skoduri joined #gluster
09:30 JesperA joined #gluster
09:31 skoduri post-factum, ping
09:31 glusterbot skoduri: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
09:32 post-factum skoduri: pong
09:32 Klas post-factum: haha, you don't respect the bot then ;)?
09:32 post-factum Klas: i respect skoduri more than bot
09:32 skoduri :D
09:33 skoduri post-factum, sorry I hadn't got a chance to re-test your workload..I am willing to try now
09:33 skoduri post-factum, wanted to verify the steps with you once
09:33 post-factum skoduri: ah, okay
09:33 skoduri trying on latest 3.7.13 build
09:34 skoduri so I have created a 5*2  volume
09:34 skoduri mounted it using FUSE
09:34 bluenemo joined #gluster
09:34 post-factum aye
09:34 skoduri on one of the nodes will run " while true; do nmap -Pn -p49152-49156 127.0.0.1; done" 49152-49156 are the brick ports
09:35 skoduri and on the mount point -
09:35 skoduri index=0; while true; do hash=$(echo $index | sha1sum); p1=$(echo $hash | cut -c 1-2); p2=$(echo $hash | cut -c 3-4); sudo mkdir -p $p1/$p2; sudo touch $p1/$p2/$hash; ((index++)); done
09:35 glusterbot skoduri: ((index's karma is now 1
09:35 somlin22 joined #gluster
09:35 skoduri right?
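The two loops skoduri pastes above can be collected into one reproduction script. This is a sketch: the `hashpath` helper name is introduced here (it derives the same `$p1/$p2/$hash` layout as the one-liner), and the port range 49152-49156 is the brick range from this particular setup.

```shell
#!/bin/sh
# Reproduction workload from the discussion above, gathered in one place.
# hashpath() is a helper name of our own; given an index, it returns the
# two-level hashed path skoduri's loop builds on the mount point.
hashpath() {
    hash=$(printf '%s\n' "$1" | sha1sum | cut -c 1-40)  # 40 hex chars
    p1=$(printf '%s' "$hash" | cut -c 1-2)              # level 1: chars 1-2
    p2=$(printf '%s' "$hash" | cut -c 3-4)              # level 2: chars 3-4
    printf '%s/%s/%s\n' "$p1" "$p2" "$hash"
}

# On a brick node: hammer the brick ports in a tight loop.
#   while true; do nmap -Pn -p49152-49156 127.0.0.1; done

# On the FUSE mount: create files forever under the hashed tree.
#   index=0
#   while true; do
#       path=$(hashpath "$index")
#       mkdir -p "$(dirname "$path")" && touch "$path"
#       index=$((index + 1))
#   done
```

Running the port scan and the file-creation loop in parallel (plus the `find`/`rm -rf` passes post-factum mentions below) reproduces the load pattern under discussion.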
09:35 post-factum lets clarify your setup
09:35 nbalacha joined #gluster
09:35 post-factum do you have 2 nodes, and one node is a client as well?
09:36 skoduri It's a 3-node setup actually...and I am using the third node (which doesn't contain any bricks) as the client
09:36 skoduri volume is created using bricks from first two nodes
09:36 post-factum anyway, i guess, that does not matter. okay
09:36 post-factum then, on 3rd node you generate workload
09:37 skoduri yupp..I am about to
09:37 nehar joined #gluster
09:37 post-factum i run several types of workload simultaneously
09:37 skoduri started
09:37 skoduri oh okay like?
09:37 post-factum also did stat with find and rm -rf with some sleep
09:37 post-factum all in parallel
09:37 skoduri oh right..I remember
09:38 post-factum time to crash, i guess, depends on your hardware and network connectivity. i guess this issue is hard to trigger with low load
09:39 skoduri oh...I started all those 3 tests in parallel
09:39 post-factum should be ok then
09:40 skoduri I haven't applied any of those additional patches which were under review..its just 3.7.13 build
09:40 skoduri and for now not issuing the "gluster v status" cmd as well
09:40 post-factum remember I tried to revert them with no luck
09:40 post-factum status is separate issue, i guess
09:40 skoduri right..thats why I guess I should be able to see the issue without those patches as well
09:41 skoduri yes..I want to get to inode cleanup crash first
09:44 post-factum besides crashing with SIGSEGV, you may get some bricks stuck at 100% cpu usage
09:44 somlin22 joined #gluster
09:44 post-factum this behavior is more likely with default volume options
09:45 post-factum the brick remains responsive, but eats 1 core
09:45 post-factum or it may get stuck at 0% cpu usage and become unresponsive
09:46 harish_ joined #gluster
09:47 JesperA joined #gluster
09:51 somlin22 joined #gluster
09:52 msvbhat joined #gluster
10:01 archit_ joined #gluster
10:04 skoduri post-factum, okay will check on that as well
10:12 arif-ali joined #gluster
10:12 msvbhat joined #gluster
10:15 somlin22 joined #gluster
10:16 ashiq_ joined #gluster
10:23 somlin22 joined #gluster
10:41 hgowtham joined #gluster
10:42 bfoster joined #gluster
10:43 armyriad joined #gluster
10:49 somlin22 joined #gluster
11:04 ramky joined #gluster
11:04 purpleidea joined #gluster
11:04 purpleidea joined #gluster
11:14 armyriad joined #gluster
11:17 hackman joined #gluster
11:18 cloph_away joined #gluster
11:33 JesperA joined #gluster
11:44 somlin22 joined #gluster
11:44 kovshenin joined #gluster
11:59 julim joined #gluster
12:00 Bhaskarakiran joined #gluster
12:05 somlin22 joined #gluster
12:13 somlin22 joined #gluster
12:20 side_control joined #gluster
12:22 side_control joined #gluster
12:35 unclemarc joined #gluster
12:42 kotreshhr left #gluster
12:49 skoduri post-factum, I haven't seen any issues yet on my setup..I will leave the tests running for a day or so and check over the weekend
12:49 post-factum skoduri: okay
12:50 hanwei_ joined #gluster
12:55 rwheeler joined #gluster
13:05 crashmag joined #gluster
13:27 julim joined #gluster
13:31 squizzi joined #gluster
13:31 hanwei_ joined #gluster
13:32 hanwei_ joined #gluster
13:32 hanwei_ joined #gluster
13:33 skylar joined #gluster
13:33 hanwei_ joined #gluster
13:34 hanwei_ joined #gluster
13:36 hanwei_ joined #gluster
13:41 somlin22 joined #gluster
13:41 shaunm joined #gluster
13:53 Guest17899 joined #gluster
14:02 necrogami joined #gluster
14:03 nbalacha joined #gluster
14:06 aphorise joined #gluster
14:07 nehar joined #gluster
14:09 Che-Anarch joined #gluster
14:19 alvinstarr joined #gluster
14:22 F2Knight_ joined #gluster
14:24 armyriad joined #gluster
14:26 hwcomcn joined #gluster
14:27 hwcomcn joined #gluster
14:27 hwcomcn joined #gluster
14:28 hwcomcn joined #gluster
14:29 derjohn_mobi joined #gluster
14:31 hagarth joined #gluster
14:32 hwcomcn joined #gluster
14:33 farhorizon joined #gluster
14:33 hwcomcn joined #gluster
14:35 hwcomcn joined #gluster
14:36 farhorizon joined #gluster
14:36 johnmilton joined #gluster
14:39 Wizek joined #gluster
14:42 johnmilton joined #gluster
14:50 _md2k_ joined #gluster
14:54 sandersr joined #gluster
14:54 md2k joined #gluster
15:00 farhoriz_ joined #gluster
15:01 _md2k_ joined #gluster
15:10 wushudoin joined #gluster
15:14 bowhunter joined #gluster
15:15 B21956 joined #gluster
15:28 jvandewege_ joined #gluster
15:29 bio__ joined #gluster
15:29 sandersr joined #gluster
15:29 thatgraemeguy joined #gluster
15:29 snila_ joined #gluster
15:30 natgeorg joined #gluster
15:31 ira joined #gluster
15:31 lalatend1M joined #gluster
15:32 dataio_ joined #gluster
15:33 hchiramm joined #gluster
15:35 alvinstarr I am seeing periodic process hangs that can only be cleared by forcibly unmounting the gluster fs. The processes appear to be hanging on a PHP call to file_put_contents. I found a link to a similar problem from 2010, but that's about it. Anybody have any suggestions?
15:37 scuttle` joined #gluster
15:38 bwerthma1n joined #gluster
15:39 side_control joined #gluster
15:42 scubacuda joined #gluster
15:42 wadeholler joined #gluster
15:47 Mmike joined #gluster
15:47 decay joined #gluster
15:47 Klas joined #gluster
15:47 gbox joined #gluster
15:47 atrius joined #gluster
15:47 foster joined #gluster
15:47 jesk joined #gluster
15:47 Seth_Karlo joined #gluster
15:47 JoeJulian joined #gluster
15:47 m0zes joined #gluster
15:49 Lee1092 joined #gluster
15:49 devyani7 joined #gluster
15:50 [o__o] joined #gluster
15:50 tyler274 joined #gluster
15:59 wadeholler joined #gluster
16:11 somlin22 joined #gluster
16:12 farhorizon joined #gluster
16:20 jwd joined #gluster
16:22 somlin22 joined #gluster
16:22 farhorizon joined #gluster
16:23 farhorizon joined #gluster
16:24 shubhendu joined #gluster
16:27 farhorizon joined #gluster
16:27 shubhendu joined #gluster
16:31 farhorizon joined #gluster
16:32 farhorizon joined #gluster
16:33 farhorizon joined #gluster
16:34 farhorizon joined #gluster
16:36 hchiramm joined #gluster
16:36 farhorizon joined #gluster
16:40 shaunm joined #gluster
16:44 Mmike joined #gluster
16:46 farhorizon joined #gluster
16:56 hackman joined #gluster
16:58 karnan joined #gluster
17:13 kovshenin joined #gluster
17:17 F2Knight_ joined #gluster
17:31 skoduri joined #gluster
17:36 hchiramm joined #gluster
17:37 julim joined #gluster
17:39 johnmilton joined #gluster
17:59 glustin joined #gluster
18:22 hagarth joined #gluster
18:41 kovshenin joined #gluster
18:49 shyam joined #gluster
18:53 om joined #gluster
18:56 ben453 joined #gluster
19:01 F2Knight_ joined #gluster
19:23 mhulsman joined #gluster
19:34 hagarth joined #gluster
19:37 Seth_Karlo joined #gluster
19:40 Seth_Kar_ joined #gluster
20:21 farhoriz_ joined #gluster
20:25 F2Knight_ joined #gluster
20:26 mahdi joined #gluster
20:26 mahdi Hi, can anyone help with gluster nfs crash ?
20:27 mahdi i'm getting crashes with nfs and losing connectivity to the storage, but after restarting glusterd it works fine for a few hours and then crashes again
20:29 farhorizon joined #gluster
20:33 ira joined #gluster
20:42 shyam left #gluster
20:52 md2k joined #gluster
21:14 uebera|| joined #gluster
21:14 uebera|| joined #gluster
21:26 jesk protip: be more verbose
21:28 JoeJulian and patient
21:34 side_control is there any way to check the progress of split-brain recovery, besides gluster volume heal $VOL info ?
21:38 F2Knigh__ joined #gluster
21:44 hagarth joined #gluster
21:51 F2Knight_ joined #gluster
21:53 amye joined #gluster
22:06 amye joined #gluster
22:11 amye_ joined #gluster
22:13 JoeJulian side_control: You can get a state dump and look for self-heal locks. It scans through sequentially so if you see what byte the lock is at and know how big your file is, you can get a reasonable estimate.
22:14 side_control JoeJulian: url/doc on how to do this?
22:14 side_control or keywords to be searching for?
22:15 JoeJulian gluster help | grep statedump
22:15 side_control JoeJulian: thanks
22:17 JoeJulian The dump will be in /var/run/gluster. Just less the file and /self-heal
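JoeJulian's statedump workflow, as a rough command sequence. The volume name is a placeholder, and the exact dump filename pattern varies by gluster version; the `gluster volume statedump` command and the default `/var/run/gluster` location are as he describes.

```shell
# Hedged sketch of estimating self-heal progress from a statedump.
# "myvol" is a placeholder volume name.
gluster volume statedump myvol

# Dumps land in /var/run/gluster by default, one file per brick
# process; the newest ones are the dump just taken.
ls -lt /var/run/gluster/

# Find the self-heal lock entries. The byte offset of the granted
# lock, compared against the file's size, gives a rough estimate of
# how far the sequential heal scan has progressed.
grep -A4 'self-heal' /var/run/gluster/*.dump.* | less
```

Equivalently, open the dump in `less` and search with `/self-heal`, as suggested above.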
22:19 F2Knight_ joined #gluster
22:19 side_control cool thank you
22:25 F2Knight joined #gluster
22:32 bkolden joined #gluster
23:29 Wizek joined #gluster
