
IRC log for #gluster, 2017-01-12


All times shown according to UTC.

Time Nick Message
00:00 Caveat4U joined #gluster
00:04 Caveat4U joined #gluster
00:10 JoeJulian xrandr: :)
00:18 ChrisHolcombe has anyone tried using gfapi with gluster on travis ci?
00:18 nathwill joined #gluster
00:21 jbrooks joined #gluster
00:24 nathwill joined #gluster
00:46 bowhunter joined #gluster
01:00 mowntan joined #gluster
01:01 JoeJulian There are CI tests against gfapi.
01:05 jbrooks joined #gluster
01:08 gyadav joined #gluster
01:11 jbrooks joined #gluster
01:15 shdeng joined #gluster
01:29 kramdoss_ joined #gluster
01:35 kettlewell joined #gluster
01:37 zatabot Anyone have any advice on how to configure gluster to demote faster from a hot tier? I have a bit of a network bottleneck to the cold tier, which certainly slows demotion down, but at regular intervals the demotion process pauses for a few seconds. I'm wondering if there is some way to eliminate this.
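For reference, a minimal sketch of the tiering tunables most relevant to demotion speed, assuming the 3.7/3.8-era tiering feature; the volume name "tiervol" and the values are placeholders, not recommendations:

    # how often the demotion scan runs, in seconds
    gluster volume set tiervol cluster.tier-demote-frequency 120
    # demotion kicks in as the hot tier fills past the low watermark (percent full)
    gluster volume set tiervol cluster.watermark-low 70
    # above the high watermark, demotion becomes much more aggressive
    gluster volume set tiervol cluster.watermark-hi 85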
01:49 caitnop joined #gluster
01:58 bbooth joined #gluster
02:00 Gambit15 joined #gluster
02:18 gem joined #gluster
02:28 loadtheacc joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:48 derjohn_mob joined #gluster
02:49 ppai joined #gluster
03:08 jbrooks joined #gluster
03:16 Caveat4U joined #gluster
03:26 atinm_ joined #gluster
03:33 magrawal joined #gluster
03:37 XpineX joined #gluster
03:41 kramdoss_ joined #gluster
03:41 kramdoss_ afk
03:44 msvbhat joined #gluster
03:49 Shu6h3ndu joined #gluster
03:53 rafi joined #gluster
03:57 shyam joined #gluster
04:05 Caveat4U joined #gluster
04:05 XpineX joined #gluster
04:13 Karan joined #gluster
04:19 jiffin joined #gluster
04:25 sanoj joined #gluster
04:25 nbalacha joined #gluster
04:30 itisravi joined #gluster
04:45 gyadav joined #gluster
04:55 ppai joined #gluster
05:07 RameshN joined #gluster
05:08 sbulage joined #gluster
05:11 gyadav_ joined #gluster
05:13 ndarshan joined #gluster
05:15 karthik_us joined #gluster
05:19 rafi joined #gluster
05:19 nobody481 joined #gluster
05:21 ankitraj joined #gluster
05:21 bbooth joined #gluster
05:24 hgowtham joined #gluster
05:26 Prasad joined #gluster
05:27 icey joined #gluster
05:27 icey joined #gluster
05:35 nishanth joined #gluster
05:36 rafi1 joined #gluster
05:42 riyas joined #gluster
05:44 sona joined #gluster
05:45 msvbhat joined #gluster
05:51 bbooth joined #gluster
05:52 skoduri joined #gluster
05:54 ankitraj joined #gluster
05:54 rafi1 joined #gluster
06:01 [diablo] joined #gluster
06:02 aravindavk joined #gluster
06:03 jkroon joined #gluster
06:07 asriram joined #gluster
06:08 AnkitRaj_ joined #gluster
06:09 apandey joined #gluster
06:11 ankitraj joined #gluster
06:11 gyadav joined #gluster
06:15 kotreshhr joined #gluster
06:17 Karan joined #gluster
06:17 poornima joined #gluster
06:20 apandey joined #gluster
06:20 susant joined #gluster
06:33 Caveat4U joined #gluster
06:39 ashiq_ joined #gluster
06:43 sbulage joined #gluster
06:44 msvbhat joined #gluster
06:55 mhulsman joined #gluster
07:03 k4n0 joined #gluster
07:18 derjohn_mob joined #gluster
07:18 Humble joined #gluster
07:19 jtux joined #gluster
07:23 rastar joined #gluster
07:27 nishanth joined #gluster
07:28 derjohn_mob joined #gluster
07:29 Intensity joined #gluster
07:30 sbulage joined #gluster
07:50 freepe_ joined #gluster
07:56 logan- joined #gluster
08:03 shyam joined #gluster
08:07 Philambdo1 joined #gluster
08:16 ivan_rossi joined #gluster
08:17 pulli joined #gluster
08:27 jri joined #gluster
08:32 mhulsman joined #gluster
08:35 Lee1092 joined #gluster
08:35 pulli joined #gluster
08:36 bbooth joined #gluster
08:49 alezzandro joined #gluster
08:54 Saravanakmr joined #gluster
08:56 fsimonce joined #gluster
08:58 k4n0 joined #gluster
08:59 Prasad joined #gluster
08:59 hgowtham joined #gluster
09:01 abyss^ JoeJulian: are you migrating from glusterfs to ceph? I can't see any new entries on your blog ;p
09:08 shyam joined #gluster
09:14 Philambdo joined #gluster
09:31 msvbhat joined #gluster
09:37 bbooth joined #gluster
09:37 aravindavk joined #gluster
09:42 pulli joined #gluster
09:47 karthik_us joined #gluster
09:56 rafi joined #gluster
09:57 kotreshhr joined #gluster
10:03 mhulsman1 joined #gluster
10:09 flying joined #gluster
10:11 mhulsman joined #gluster
10:12 mhulsman1 joined #gluster
10:15 squizzi joined #gluster
10:15 squizzi joined #gluster
10:34 mhulsman joined #gluster
10:35 mhulsman1 joined #gluster
10:36 rafi joined #gluster
10:39 bbooth joined #gluster
10:39 rafi1 joined #gluster
10:40 aravindavk_ joined #gluster
10:45 phileas joined #gluster
10:53 msvbhat joined #gluster
10:54 kotreshhr joined #gluster
10:55 karthik_us joined #gluster
10:57 mhulsman joined #gluster
11:12 mhulsman joined #gluster
11:15 hgowtham joined #gluster
11:27 legreffier joined #gluster
11:28 dspisla joined #gluster
11:29 dspisla Hello, I cannot start the gluster daemon after I stop and reboot my EC2 instance running CentOS. Does anybody have experience with this issue?
11:31 jiffin dspisla: are you sure glusterd is not running?
11:32 dspisla yes
11:33 dspisla I checked it
11:33 Philambdo joined #gluster
11:35 dspisla @jiffin yes
11:35 Philambdo joined #gluster
11:39 k4n0 joined #gluster
11:40 bbooth joined #gluster
11:40 jiffin dspisla: which gluster version are you running?
11:49 Philambdo1 joined #gluster
11:50 Philambdo joined #gluster
12:00 pmisiak joined #gluster
12:01 pmisiak hi guys, I have a question for the experts :)
12:02 pmisiak how exactly does gluster compute which brick a file is stored on?
12:02 pmisiak I have a hash computed from the file name and the directory GFID
12:03 pmisiak and how does gluster know the directory layout? which brick holds which hash range?
12:03 pmisiak I know it is stored in trusted.glusterfs.dht
12:03 jdossey joined #gluster
12:04 pmisiak but does that mean gluster has to read this xattr from every brick to build the hash ring?
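The layout xattr pmisiak mentions can be read directly off each brick; a rough sketch, with the brick path as a placeholder:

    # show the DHT layout range this brick holds for a given directory
    getfattr -n trusted.glusterfs.dht -e hex /data/brick1/myvol/somedir
    # the hex blob encodes the start and end of the hash range assigned to this
    # brick; the client reads this xattr from every subvolume when it first looks
    # up the directory and caches the resulting layout, rather than re-reading it
    # for every file.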
12:06 Philambdo joined #gluster
12:14 dspisla joined #gluster
12:14 dspisla @jiffin I am running glusterfs 3.8.5
12:16 Slashman joined #gluster
12:17 derjohn_mob joined #gluster
12:31 kettlewell joined #gluster
12:41 bbooth joined #gluster
12:50 rastar joined #gluster
12:52 mhulsman joined #gluster
12:55 jiffin dspisla: please check the glusterd log (/var/log/glusterfs/etc-glusterfs-glusterd.vol.log) for clues
12:55 jiffin dspisla: ^
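A minimal sketch of the checks being suggested here, assuming default CentOS paths:

    # confirm whether glusterd is running
    systemctl status glusterd
    # inspect the most recent startup attempt
    tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    # optionally run it in the foreground with debug output for more detail
    glusterd --debug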
13:01 dspisla @jiffin Thank you. I checked this log file. The messages led me to the solution. I just removed /var/lib/glusterd and now the daemon is working :-)
13:09 shutupsquare joined #gluster
13:13 jiffin dspisla: omg, you lost all the gluster volumes you had created
13:15 dspisla @jiffin that's ok, it was just for testing purposes
13:16 jiffin dspisla: okay
13:17 mlhess- joined #gluster
13:18 shortdudey123 joined #gluster
13:18 arpu joined #gluster
13:21 Wizek_ joined #gluster
13:23 dspisla exit
13:26 susant left #gluster
13:28 B21956 joined #gluster
13:32 hgowtham joined #gluster
13:38 B21956 joined #gluster
13:42 bbooth joined #gluster
13:47 shutupsquare Hi, I'm looking for help with an error I'm getting: "[2017-01-12 13:19:37.326562] W [fuse-bridge.c:2167:fuse_writev_cbk] 0-glusterfs-fuse: 5852034: WRITE => -1 (Input/output error)". I don't really know what action caused it, and I get lots of these logged. How can I start diagnosing it?
13:48 ira joined #gluster
13:53 rwheeler joined #gluster
13:53 plarsen joined #gluster
13:54 nobody481 joined #gluster
13:59 rastar joined #gluster
14:02 shyam joined #gluster
14:02 jiffin1 joined #gluster
14:27 ankitraj joined #gluster
14:33 nbalacha joined #gluster
14:33 saybeano Hello, from `peer probe status` I am seeing that all peers are connected; however, peer probe on all of them is reporting "peer probe: failed: Probe returned with Transport endpoint is not connected" - is this a familiar issue to anyone? Cheers.
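For reference, a quick sketch of the checks usually run when a probe reports a transport error; hostnames are placeholders:

    # see what each node already knows about its peers
    gluster peer status
    # re-probe one peer using the name the other nodes resolve
    gluster peer probe server2.example.com
    # confirm the glusterd management port is reachable from the probing node
    nc -zv server2.example.com 24007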
14:37 skylar joined #gluster
14:43 bbooth joined #gluster
14:50 shaunm joined #gluster
15:01 kotreshhr left #gluster
15:02 susant joined #gluster
15:02 susant left #gluster
15:20 Gambit15 joined #gluster
15:23 shyam joined #gluster
15:23 shyam left #gluster
15:25 nbalacha joined #gluster
15:31 aronnax joined #gluster
15:38 alvinstarr joined #gluster
15:39 jiffin joined #gluster
15:43 bbooth joined #gluster
15:43 ndarshan joined #gluster
15:44 farhorizon joined #gluster
15:45 farhoriz_ joined #gluster
15:48 jiffin1 joined #gluster
15:50 renout_away joined #gluster
15:51 JoeJulian abyss^: lol, nope. Never will either. ;)
15:53 riyas joined #gluster
15:56 h4xr joined #gluster
15:56 h4xr left #gluster
16:06 msvbhat joined #gluster
16:06 snehring joined #gluster
16:07 XpineX joined #gluster
16:09 bbooth joined #gluster
16:14 Slashman joined #gluster
16:17 Caveat4U joined #gluster
16:24 kramdoss_ joined #gluster
16:26 farhorizon joined #gluster
16:33 farhoriz_ joined #gluster
16:36 msvbhat joined #gluster
16:47 Gambit15 Hey guys, I'm trying to find out the specifics of how geo-rep works
16:47 RameshN joined #gluster
16:48 Gambit15 What's the difference between Gluster's geo-rep & running a cronjob with rsync?
16:50 bowhunter joined #gluster
16:58 om2 joined #gluster
17:01 Karan joined #gluster
17:03 derjohn_mob joined #gluster
17:10 jarbod_ joined #gluster
17:16 prasanth joined #gluster
17:28 apandey joined #gluster
17:31 niknakpaddywak joined #gluster
17:32 ivan_rossi left #gluster
17:33 jdossey joined #gluster
17:35 farhorizon joined #gluster
17:40 JoeJulian Gambit15: Georep will only compare and sync the files that have actually changed. An rsync cronjob has to walk the whole tree.
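For context, geo-replication follows the volume changelog rather than walking the tree; a minimal sketch of the usual session setup, assuming passwordless SSH from the master to the slave host is already in place (volume and host names are placeholders):

    # create the geo-rep session from the master volume to the slave volume
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    # start it and confirm the session reports Active
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status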
17:49 bbooth joined #gluster
17:56 Gambit15 lsyncd is an rsync wrapper that monitors for inotify & fsevents hooks in the underlying FS
17:58 JoeJulian Yes, you could run lsyncd on all your servers and come up with some way of preventing them from interfering with each other.
17:59 Gambit15 I'm only asking as I need to set up replication to my FreeBSD-based backup box, but I see geo-rep for the gluster port isn't working yet
17:59 JoeJulian You couldn't run it on your clients. FUSE has no inotify
17:59 Gambit15 fsevents?
17:59 JoeJulian not sure
17:59 Gambit15 The other option I'm considering is using a Linux based VM to mount the backup volume over NFS
18:00 Gambit15 ...installing the gluster client on that
18:00 Gambit15 bit ugly though
18:00 JoeJulian If you're going to do that, you could just run georep in the linux vm.
18:01 Gambit15 Exactly. The VM would be used as a proxy to the BSD storage
18:01 Gambit15 Not a very pretty solution though...
18:06 JoeJulian Pretty would be getting any necessary patches in to make it work in bsd.
18:06 JoeJulian I'm really not sure why it wouldn't.
18:07 Gambit15 By the way, whilst you're here, can you tell me if there's a way to get more details about healing processes?
18:08 Gambit15 Heal duration & bits copied, for example
18:09 JoeJulian Not really in any useful way. You can glean where in a file the shd is by looking at state dumps and finding the lock range that it's currently working on.
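A rough sketch of that state-dump approach; the dump directory and the exact section format vary by version, so treat the grep pattern as a starting point:

    # ask the brick processes to write state dumps
    gluster volume statedump myvol
    # dumps usually land under /var/run/gluster; active inode locks (with their
    # byte ranges) hint at where the self-heal daemon is currently working
    grep -n "ACTIVE" /var/run/gluster/*.dump.*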
18:11 Gambit15 Hmm :/ Any idea if there's anything in the roadmap to improve this?
18:12 JoeJulian We talked about it at the gluster developer summit. I don't know if there's actually any bugzilla entries for it though. Improvements are intended though.
18:12 JoeJulian I should really re-read my posts before I hit enter so I'm not so redundant.
18:12 Gambit15 ?
18:14 JoeJulian "though" I say it way too much.
18:16 Gambit15 Ah, likewise
18:16 Gambit15 And cheers for the info
18:19 zatabot joined #gluster
18:23 ChrisHolcombe is the gluster rest api ready?  I'm having trouble figuring out what the status is of it
18:23 JoeJulian I thought that was relegated to glusterd2?
18:24 zatabot Quick question, does anyone know if cluster tiering config requires a restart to go into effect (via `gluster vol set <volname> <conf>.<conf> <value>`)? I made changes to watermark settings without restarting and it seems to have changed nothing.
18:25 ChrisHolcombe JoeJulian, it seems like it is
18:27 vbellur joined #gluster
18:28 JoeJulian zatabot: My understanding was that it does not, but I haven't tested that myself.
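One way to confirm whether a changed option is actually in effect, assuming a release new enough to have "volume get" (3.7.3 or later); the volume name is a placeholder:

    # show the value the volume is currently using for one option
    gluster volume get tiervol cluster.watermark-hi
    # or list every option explicitly set on the volume
    gluster volume info tiervol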
18:32 apandey joined #gluster
18:44 federicoaguirre joined #gluster
18:44 federicoaguirre Hi there, I have a cluster with 2 nodes in replica....
18:44 federicoaguirre both are online but one of them is unsynced.!
18:44 federicoaguirre any help?
18:45 Gambit15 So split-brain then?
18:46 federicoaguirre I thought that.!
18:46 jbrooks joined #gluster
18:46 federicoaguirre but
18:46 federicoaguirre Brick prd-zetech-glusterfs-01:/var/data/shared
18:46 federicoaguirre Number of entries: 0
18:46 federicoaguirre Brick prd-zetech-glusterfs-02:/var/data/shared
18:46 federicoaguirre Number of entries: 0
18:46 federicoaguirre this is the output for: sudo gluster volume heal storage info split-brain
18:46 Gambit15 Not here!
18:46 Gambit15 Use fpaste.org or somewhere
18:47 Gambit15 ah...
18:47 Gambit15 So why do you think they're unsynced?
18:47 federicoaguirre I don't know.!
18:47 federicoaguirre sorry for the paste.!
18:48 Gambit15 "both are online but one of them is unsynced.!"
18:48 Gambit15 Why do you think they're unsynced?
18:48 federicoaguirre If I run "sudo gluster volume heal storage info" (on both nodes)... I get a long list of entries
18:48 federicoaguirre 1271 on both nodes.!
18:48 msvbhat joined #gluster
18:48 federicoaguirre Number of entries: 1271
18:49 federicoaguirre because some files I have on the first node are not on the second.!
18:50 Gambit15 Go through the logs & see if you can find any signs of activity
18:50 federicoaguirre http://paste.fedoraproject.org/526349/84247044/
18:50 glusterbot Title: #526349 • Fedora Project Pastebin (at paste.fedoraproject.org)
18:51 Gambit15 I've just been enquiring about the exact same thing, gluster's very terse info about the healing process
18:52 federicoaguirre :(
18:52 Gambit15 JoeJulian suggested the following: "You can glean where in a file the shd is by looking at state dumps and finding the lock range that it's currently working on."
18:54 federicoaguirre in etc-glusterfs-glusterd.vol.log I can see => "Commit failed on prd-zetech-glusterfs-01. Please check log file for details"
18:55 federicoaguirre in glfsheal-storage.log => Server and Client lk-version numbers are not same, reopening the fds
18:55 glusterbot federicoaguirre: This is normal behavior and can safely be ignored.
18:55 Gambit15 Yeah, afraid I can't advise any further, I'd just trawl all of the logs & filter for errors & warnings
18:55 federicoaguirre nothing else that signs any error.!
18:56 Gambit15 Checked to see if there's any traffic going over the interfaces?
18:57 Gambit15 The best method I've come up with so far is just monitoring each node's storage interface for spikes
18:57 Gambit15 Other than that, just give it time
18:58 Gambit15 I've had a few times where my bricks have mysteriously come unsynced & it took a day to repair itself
18:59 Gambit15 Up all night trying to analyse & fix the problem, and then woke up to find it'd fixed itself
19:00 farhoriz_ joined #gluster
19:01 prasanth joined #gluster
19:02 side_control joined #gluster
19:03 federicoaguirre yep, it's good... I'll try it
19:04 federicoaguirre another error I could see is: 0-transport: disconnecting now
19:05 alvinstarr joined #gluster
19:06 bowhunter joined #gluster
19:06 JoeJulian federicoaguirre: Does that follow a timeout?
19:07 federicoaguirre ¿?
19:07 federicoaguirre sorry?
19:07 federicoaguirre if this issue could be related to a timeout?
19:09 JoeJulian "0-transport: disconnecting now" can follow a ping timeout. If it does, it may be resource starvation or network trouble.
19:10 federicoaguirre got it.! no, I ran an infinite ping while the errors appeared... It doesn't show me a timeout...
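For reference, the timeout JoeJulian mentions is the per-volume network.ping-timeout (42 seconds by default); a minimal sketch for checking it and for spotting timeouts in the client log, with the volume name and log file name as placeholders:

    gluster volume get storage network.ping-timeout
    # ping-timeout events appear in the fuse client log, named after the mount point
    grep -i "ping timeout" /var/log/glusterfs/<mountpoint>.log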
19:11 purpleidea joined #gluster
19:11 purpleidea joined #gluster
19:11 federicoaguirre both servers are in AWS on a GB network
19:14 federicoaguirre http://paste.fedoraproject.org/526358/84248434/
19:14 glusterbot Title: #526358 • Fedora Project Pastebin (at paste.fedoraproject.org)
19:14 federicoaguirre everything seems to be OK.!
19:15 vbellur joined #gluster
19:15 federicoaguirre http://paste.fedoraproject.org/526360/24852214/
19:15 glusterbot Title: #526360 • Fedora Project Pastebin (at paste.fedoraproject.org)
19:22 jwd joined #gluster
19:22 niknakpaddywak joined #gluster
19:22 shutupsquare joined #gluster
19:29 ankitraj joined #gluster
19:34 jdossey joined #gluster
19:41 arpu joined #gluster
19:47 federicoaguirre How can I rebuild a brick?
19:49 AnkitRaj_ joined #gluster
19:50 msvbhat joined #gluster
19:51 vbellur joined #gluster
19:54 annettec joined #gluster
20:01 jbrooks joined #gluster
20:05 rwheeler joined #gluster
20:07 JoeJulian federicoaguirre: The way I do it is to kill glusterfsd for that brick, unmount it, format it, mount it, then "gluster volume start $volname force"
20:07 JoeJulian Then self-heal handles the rest.
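A hedged sketch of that procedure, assuming an XFS brick at /data/brick1 on /dev/sdb1 and a volume named myvol; double-check which glusterfsd PID belongs to this brick before killing anything:

    # find the PID of the brick process for this brick
    gluster volume status myvol
    kill <brick-pid>
    # wipe and recreate the brick filesystem
    umount /data/brick1
    mkfs.xfs -f /dev/sdb1
    mount /dev/sdb1 /data/brick1
    # restart the missing brick process; self-heal then repopulates the data
    gluster volume start myvol force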
20:09 tom[] joined #gluster
20:15 federicoaguirre great.!
20:15 federicoaguirre thanks!
20:27 bbooth joined #gluster
20:30 shutupsquare Hi, I'm looking for help with an error I'm getting: "[2017-01-12 13:19:37.326562] W [fuse-bridge.c:2167:fuse_writev_cbk] 0-glusterfs-fuse: 5852034: WRITE => -1 (Input/output error)". I don't really know what action caused it, and I get lots of these logged. How can I start diagnosing it?
20:38 JoeJulian 1st, that's just a warning so it's probably safe to ignore
20:38 JoeJulian shutupsquare: ^
20:39 shaunm joined #gluster
20:39 JoeJulian shutupsquare: 2nd, I'd compare that client log with the brick logs and see if there's anything that correlates.
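A minimal sketch of that correlation, assuming default log locations; the timestamp is taken from the error shutupsquare pasted:

    # the fuse client log, named after the mount point
    grep "2017-01-12 13:19:37" /var/log/glusterfs/<mountpoint>.log
    # the brick logs on each server, around the same moment
    grep "2017-01-12 13:19" /var/log/glusterfs/bricks/*.log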
20:40 Humble joined #gluster
20:40 derjohn_mob joined #gluster
20:43 kpease joined #gluster
21:01 alvinstarr joined #gluster
21:04 MidlandTroy joined #gluster
21:07 shutupsquare Okay thanks will take a look now
21:09 abyss^ JoeJulian: That would be very surprising (such a migration ;))
21:26 PaulCuzner joined #gluster
21:26 PaulCuzner left #gluster
21:27 msvbhat joined #gluster
22:02 skylar joined #gluster
22:11 bbooth joined #gluster
22:12 bowhunter joined #gluster
22:40 vbellur joined #gluster
22:54 farhorizon joined #gluster
23:09 PaulCuzner joined #gluster
23:10 farhoriz_ joined #gluster
23:31 msvbhat joined #gluster
