
IRC log for #gluster, 2016-12-02

All times shown according to UTC.

Time Nick Message
00:02 ashiq joined #gluster
00:12 kpease joined #gluster
00:24 nbalacha joined #gluster
00:27 arif-ali joined #gluster
00:30 haomaiwang joined #gluster
00:43 bwerthmann joined #gluster
00:51 martin_pb joined #gluster
00:57 farhorizon joined #gluster
01:07 susant joined #gluster
01:08 DV__ joined #gluster
01:10 shdeng joined #gluster
01:26 dnorman joined #gluster
01:32 nbalacha joined #gluster
01:36 dnorman joined #gluster
01:39 Wizek joined #gluster
01:42 haomaiwang joined #gluster
02:05 phileas joined #gluster
02:12 derjohn_mobi joined #gluster
02:14 jvandewege_ joined #gluster
02:14 haomaiwang joined #gluster
02:33 plarsen joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:02 Lee1092 joined #gluster
03:02 mb_ joined #gluster
03:05 social joined #gluster
03:14 haomaiwang joined #gluster
03:16 dnorman joined #gluster
03:26 masber joined #gluster
03:27 masber joined #gluster
03:43 susant joined #gluster
03:44 jiffin joined #gluster
03:55 magrawal joined #gluster
04:04 itisravi joined #gluster
04:12 hgowtham joined #gluster
04:12 atinm joined #gluster
04:14 haomaiwang joined #gluster
04:14 poornima_ joined #gluster
04:15 Shu6h3ndu joined #gluster
04:28 prth joined #gluster
04:33 dnorman joined #gluster
04:37 ashiq joined #gluster
04:37 Gnomethrower joined #gluster
04:40 ndarshan joined #gluster
04:40 kdhananjay joined #gluster
04:42 sbulage joined #gluster
04:47 RameshN joined #gluster
04:52 sanoj joined #gluster
04:55 rafi joined #gluster
04:55 prth joined #gluster
04:57 buvanesh_kumar joined #gluster
04:59 alvinstarr joined #gluster
05:00 Prasad joined #gluster
05:03 karthik_us joined #gluster
05:10 karthik_us joined #gluster
05:11 skoduri joined #gluster
05:14 haomaiwang joined #gluster
05:17 apandey joined #gluster
05:19 hchiramm_ joined #gluster
05:24 msvbhat joined #gluster
05:26 ppai joined #gluster
05:31 rafi1 joined #gluster
05:35 mahendratech joined #gluster
05:39 rafi joined #gluster
05:39 riyas joined #gluster
05:40 k4n0 joined #gluster
05:40 mb_ joined #gluster
05:43 Klas just curious, is there any plan to make gluster support atomic mv or similar operations?
05:44 Klas I expect the issue is that metadata needs to be synced after writing and verified, which makes it seem close to impossible to make this work; I just got the question from a few colleagues yesterday (they want to use glusterfs for a pid file across nodes and realized that it was not atomic, which complicates things)
05:44 mahendratech joined #gluster
05:45 Klas also, http://www.gluster.org/pipermail/gluster-devel/2010-February/035656.html states something about "Furthermore this actually “desyncs” the server in a more complex setup"
05:45 glusterbot Title: [Gluster-devel] atomic operations fails_ (at www.gluster.org)
05:45 skoduri joined #gluster
05:45 msvbhat joined #gluster
05:45 Klas does this mean that you can actually create a split-brain?
05:45 Klas from client
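
For context, what Klas's colleagues presumably want is the usual write-then-rename trick for a pid file, which only works if the final rename is atomic; a minimal sketch (the mount point and file names are made up for illustration):

    # write the pid to a temp file on the same gluster mount, then rename it into place;
    # the rename is the step whose atomicity is in question here
    tmp=$(mktemp /mnt/gluster/app.pid.XXXXXX)
    echo $$ > "$tmp"
    mv -f "$tmp" /mnt/gluster/app.pid
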
05:48 kdhananjay joined #gluster
05:49 mahendratech joined #gluster
05:50 kdhananjay joined #gluster
05:50 RameshN joined #gluster
05:54 sanoj joined #gluster
05:55 mahendratech joined #gluster
05:55 sanoj joined #gluster
05:58 sanoj joined #gluster
06:00 mahendratech left #gluster
06:02 ashiq joined #gluster
06:03 itisravi joined #gluster
06:05 Karan joined #gluster
06:07 msvbhat joined #gluster
06:10 bhakti joined #gluster
06:11 karthik_us joined #gluster
06:14 haomaiwang joined #gluster
06:17 susant joined #gluster
06:19 kotreshhr joined #gluster
06:25 prth joined #gluster
06:36 susant joined #gluster
06:38 aravindavk joined #gluster
06:45 mahendratech joined #gluster
06:50 karthik_us joined #gluster
06:53 auzty joined #gluster
06:59 skoduri joined #gluster
06:59 rafi joined #gluster
07:00 [diablo] joined #gluster
07:06 mhulsman joined #gluster
07:13 sbulage joined #gluster
07:14 haomaiwang joined #gluster
07:18 mb_ joined #gluster
07:20 msvbhat joined #gluster
07:21 skoduri joined #gluster
07:26 rastar joined #gluster
07:36 Gnomethrower joined #gluster
07:43 BuBU29 joined #gluster
07:45 BuBU29 joined #gluster
07:45 BuBU29 joined #gluster
07:46 BuBU29 joined #gluster
07:47 BuBU29 joined #gluster
07:48 BuBU29 joined #gluster
07:56 mahendratech joined #gluster
07:57 ivan_rossi joined #gluster
07:59 kramdoss_ joined #gluster
08:11 k4n0 joined #gluster
08:14 jri joined #gluster
08:14 ashiq joined #gluster
08:14 haomaiwang joined #gluster
08:16 Philambdo1 joined #gluster
08:18 rastar joined #gluster
08:32 buvanesh_kumar joined #gluster
08:42 prth joined #gluster
08:44 karthik_us joined #gluster
08:46 buvanesh_kumar joined #gluster
08:46 msvbhat joined #gluster
08:47 DV__ joined #gluster
08:49 devyani7 joined #gluster
08:50 fsimonce joined #gluster
08:53 k4n0 joined #gluster
08:53 jkroon joined #gluster
08:54 Philambdo joined #gluster
08:59 kdhananjay joined #gluster
08:59 TvL2386 joined #gluster
09:01 ankitraj joined #gluster
09:03 ShwethaHP joined #gluster
09:04 Muthu joined #gluster
09:13 skoduri joined #gluster
09:14 haomaiwang joined #gluster
09:14 mahendratech left #gluster
09:18 Slashman joined #gluster
09:20 Philambdo joined #gluster
09:28 Muthu joined #gluster
09:31 skoduri joined #gluster
09:32 BuBU29 joined #gluster
09:32 fsimonce joined #gluster
09:33 fsimonce joined #gluster
09:34 panina joined #gluster
09:35 jkroon joined #gluster
09:40 mps joined #gluster
09:48 flying joined #gluster
09:49 Guest97661 hi
09:49 glusterbot Guest97661: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:53 mahendratech joined #gluster
09:56 skoduri joined #gluster
10:00 derjohn_mobi joined #gluster
10:01 bartden joined #gluster
10:01 ankit__ joined #gluster
10:01 bartden hi, is there any option in fstab to continue booting if the gluster volume is not available?
10:02 rastar joined #gluster
10:03 ashiq joined #gluster
10:04 jri joined #gluster
10:04 cloph that should be the default unless it is the root volume/at least debian is happy if it cannot mount a glusterfs volume
10:05 cloph apart from that there's explicit nofail option
10:05 cloph but should not be needed.
10:05 bartden cloph do you mean using the default option? because now I only have acl set as an option in fstab
10:06 cloph while I'm using "defaults" option, that AFAIK doesn't explicitly include nofail, but still booting isn't stalled
10:07 cloph might be different when not using a glusterfs/fuse mount but nfs
10:09 bartden ok thx for the info .. i guess i also need _netdev … probably tries to mount when network is not up yet
10:11 cloph netdev didn't really work for me, as that only cares about some network, not necessarily the one you need for gluster to be up. also if possible fetch the volume info from another peer/avoid localhost, as gluster's startup might not be finished when it tries to mount/the volume might not be available locally. So fetching info from another host should be more reliable (provided that network is up...
10:12 hackman joined #gluster
10:12 k4n0 joined #gluster
10:14 bartden cloph yes, but i have a distributed setup … meaning if one node is not up the volume is not usable (because not all data is there)
10:14 haomaiwang joined #gluster
10:16 cloph ah, so then you can forget about having it mount via fstab alone/you'll need something like an additional mount -a in rc.local / in a @reboot cronjob or similar.
10:18 fsimonce joined #gluster
10:19 jri joined #gluster
10:19 bartden which will mount the gluster volume when it is available, correct?
10:20 cloph at that point it hopefully will be available, there's no dedicated way to see when a volume is up, so you have to weigh in how much you delay the mount-attempt/how many mount-attempts you scatter
10:22 bartden ok thx!
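
A rough sketch of what the above could look like in practice; the hostnames, volume name and mount point are placeholders, and whether nofail/_netdev alone are enough depends on the distribution, as discussed:

    # /etc/fstab: don't stall booting if the volume can't be mounted yet,
    # and fetch the volfile from a peer rather than localhost
    server2:/myvol  /mnt/gluster  glusterfs  defaults,acl,nofail,_netdev  0  0

    # /etc/cron.d/gluster-mount: retry shortly after boot, per cloph's suggestion
    @reboot  root  sleep 60 && mount -a -t glusterfs
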
10:24 skoduri joined #gluster
10:24 DV__ joined #gluster
10:39 k4n0 joined #gluster
10:40 burn joined #gluster
10:46 Muthu joined #gluster
10:57 rastar joined #gluster
10:57 hchiramm_ joined #gluster
10:58 mahendratech left #gluster
10:58 skoduri joined #gluster
10:59 mahendratech joined #gluster
11:01 mgk joined #gluster
11:02 panina joined #gluster
11:02 mgk Hi there! I hope someone can help me: We have a pretty big, replicated gluster volume (about 170G). We had to add new hard drives, since we ran out of space.
11:03 mgk To migrate the data I performed a replace-brick
11:03 rafi joined #gluster
11:04 mgk gluster volume replace-brick ws0 web1:/storage/bricks/ws0/ web1:/bricks/ws0/ commit force
11:05 mgk The statistics show some progress, but it's been 2 days now and df only shows 41G on the new hard drive. What's more confusing is that the used space on the new drive seems to fluctuate
11:05 mgk We already were at 47G used, then it dropped back down to 41G
11:06 mgk My question is: what was my mistake? Even after some googling and reading the docs, I found no way to initialize a brick with existing data or to move the location of a brick on the server
11:07 mgk Starting time of crawl: Thu Dec 1 03:46:36 2016 | Crawl is in progress | Type of crawl: INDEX | No. of entries healed: 1934861 | No. of entries in split-brain: 0 | No. of heal failed entries: 19931
11:08 mgk This is the current statistic
11:08 mgk I would greatly appreciate if someone knew a way to speed this up.
11:09 mgk Another problem is that while the selfheal is migrating the data, the fs is at times unusable leading to processes which are hanging in syscalls (state D in ps)
11:11 apandey joined #gluster
11:12 mgk any help would be greatly appreciated
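
For anyone following along, the commands involved look roughly like this; the volume name and brick paths are taken from mgk's messages above, and the heal subcommands assume a reasonably recent 3.x release:

    # the replace-brick that started the migration
    gluster volume replace-brick ws0 web1:/storage/bricks/ws0/ web1:/bricks/ws0/ commit force

    # watch self-heal progress and failures
    gluster volume heal ws0 info
    gluster volume heal ws0 statistics
    gluster volume heal ws0 info split-brain
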
11:13 hchiramm joined #gluster
11:13 jri joined #gluster
11:14 DV__ joined #gluster
11:14 haomaiwang joined #gluster
11:20 sanoj joined #gluster
11:21 skoduri joined #gluster
11:23 jri joined #gluster
11:39 jiffin joined #gluster
11:42 skoduri joined #gluster
11:52 haomaiwang joined #gluster
12:14 haomaiwang joined #gluster
12:15 skoduri joined #gluster
12:16 ira joined #gluster
12:18 arc0 joined #gluster
12:34 B21956 joined #gluster
12:35 hchiramm joined #gluster
12:44 kramdoss_ joined #gluster
12:46 Klas any recommendations for upping speed when rsyncing ubuntu mirrors to gluster volumes?
12:50 cloph performance.quick-read: off and performance.force-readdirp: off is what I have on my rsync sink gluster volume... but turn on profiling and check where the time is spent...
12:52 cloph also you can try glusterfs/fuse mount vs nfs mount
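
Applied via the CLI, cloph's settings and the profiling suggestion would look something like this (the volume name is a placeholder):

    gluster volume set myvol performance.quick-read off
    gluster volume set myvol performance.force-readdirp off

    # enable profiling, run the rsync, then check where the time is spent
    gluster volume profile myvol start
    gluster volume profile myvol info
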
12:56 garamelek cloph
12:57 garamelek how to mount with nfs ?
12:57 garamelek do you have instruction ?
12:57 cloph depends on what version of gluster you use; for < 3.8 you can use gluster's builtin nfs, from 3.8 on you have to install and configure ganesha-nfs
12:58 cloph (or share a local mount via kernel /system nfs, but that's pretty pointless)
12:59 garamelek so ganesha-nfs is better right ?
12:59 cloph yes
13:00 f0rpaxe joined #gluster
13:00 cloph although default documentation is kind of complicated, as they go straight into a high-availability config, instead of having a simple export only.
13:00 garamelek thanks, I am trying to play with ganesha
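
For reference, mounting a gluster volume over NFS from a client looks roughly like this; gluster's built-in NFS server speaks NFSv3 only, so vers=3 is usually forced (hostname and volume name are placeholders, and with nfs-ganesha the export path depends on how the export is configured):

    mount -t nfs -o vers=3,mountproto=tcp server1:/myvol /mnt/myvol
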
13:01 garamelek also, I didn't find it in the documentation
13:01 garamelek when does replication start?
13:01 garamelek after files are placed on the file system? or is it like an asynchronous mode?
13:02 johnmilton joined #gluster
13:02 cloph replication is synchronous; if gluster returns "file was saved" to the application, the file is replicated.
13:03 nbalacha joined #gluster
13:03 cloph geo-replication is a different operation mode/for a different usecase, that then is async.
13:04 garamelek is that possible to change from synchronous to async ?
13:05 garamelek not a geo-cluster
13:06 cloph as said: geo-replication would be async, but that would be different gluster volumes, so it depends on what you actually want. default replication cannot be async.
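
Since geo-replication came up: it is a separate, asynchronous master-to-slave relationship between two volumes, set up roughly along these lines (hosts and volume names are placeholders; the slave volume and passwordless ssh to the slave must already exist):

    gluster volume geo-replication myvol slavehost::slavevol create push-pem
    gluster volume geo-replication myvol slavehost::slavevol start
    gluster volume geo-replication myvol slavehost::slavevol status
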
13:07 Klas cloph: oh, interesting, and we are currently looking into nfs =)
13:08 garamelek we too, bad idea: my infrastructure consists of Windows-based applications
13:09 cloph only Windows server versions or the pro/enterprise variants do support mounting nfs...
13:09 garamelek and I wish to use Windows + glusterfs
13:11 cloph for "plain"/end-user windows clients you can use samba to export a local gluster mount, but cannot use "native" mount...
13:11 kkeithley Then use samba. There's a gluster-vfs that will serve gluster storage to Windows clients
13:13 garamelek so prefer samba?
13:14 haomaiwang joined #gluster
13:15 cloph depends on what version of windows you use/whether those can mount nfs or not.
13:15 DV__ joined #gluster
13:15 derjohn_mobi joined #gluster
13:15 garamelek enterprise versions - win 2012 standard/datacenter
13:16 garamelek both can mount nfs
13:16 cloph then no need for the additional samba/cifs layer
13:18 garamelek last question: how does windows/nfs understand when and how to switch the connection over to another active member of gluster if the master fails?
13:19 garamelek if the master fails, reboots or is unreachable
13:20 garamelek as I understand it, unix-based clients using mount.glusterfs do understand that
13:20 msvbhat joined #gluster
13:21 unclemarc joined #gluster
13:25 cloph the fuse mount is aware, as it knows about the volume as a whole, nfs doesn't. If your nfs server goes down, your windows nfs-mount will be stale/has to wait until that server comes back.
13:26 cloph you could do the gluster-ha route using virtual ip and the watchdog thingie or whatever it is called
13:26 cloph but of course much more complicated to setup (and I have no experience with that)
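
Going back to kkeithley's samba suggestion: the vfs_glusterfs module lets smbd talk to the volume directly, with a share definition along these lines (share name, volume name and log path are placeholders):

    [gluster-share]
        vfs objects = glusterfs
        glusterfs:volume = myvol
        glusterfs:logfile = /var/log/samba/glusterfs-myvol.log
        path = /
        read only = no
        kernel share modes = no
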
13:37 armyriad joined #gluster
13:45 vbellur joined #gluster
13:50 prth joined #gluster
14:14 haomaiwang joined #gluster
14:18 jiffin joined #gluster
14:19 derjohn_mobi joined #gluster
14:23 plarsen joined #gluster
14:28 msvbhat joined #gluster
14:28 skylar joined #gluster
14:30 bwerthmann joined #gluster
14:33 kotreshhr left #gluster
14:34 squizzi joined #gluster
14:38 aravindavk joined #gluster
14:41 DV__ joined #gluster
14:53 bartden left #gluster
14:59 ashiq joined #gluster
15:00 dnorman joined #gluster
15:03 msvbhat joined #gluster
15:05 f0rpaxe joined #gluster
15:06 shyam joined #gluster
15:14 haomaiwang joined #gluster
15:19 dnorman_ joined #gluster
15:21 dnorman joined #gluster
15:24 annettec joined #gluster
15:28 jiffin joined #gluster
15:51 flying joined #gluster
16:02 wushudoin joined #gluster
16:09 Gambit15 joined #gluster
16:12 shyam joined #gluster
16:14 haomaiwang joined #gluster
16:18 msvbhat joined #gluster
16:19 ivan_rossi left #gluster
16:26 mb_ joined #gluster
16:36 shyam joined #gluster
16:37 mb_ joined #gluster
16:43 * JoeJulian wonders why this industry keeps calling it cifs when it hasn't been cifs for decades...
16:48 snehring easier to say as a word than smb, also sounds fancier
16:49 kkeithley X Windows or (X11, X Window System)   Dextron or Dexron? nuculer or nuclear?
16:58 hackman joined #gluster
17:00 JoeJulian X, Dextron, nuclear - though looking at what's coming up, I'm almost nostalgic for GW's nuculer.
17:01 MidlandTroy joined #gluster
17:01 riyas joined #gluster
17:02 farhorizon joined #gluster
17:14 haomaiwang joined #gluster
17:16 hchiramm joined #gluster
17:18 hchiramm_ joined #gluster
17:58 farhorizon joined #gluster
18:11 kkeithley 2 out of three. Dexron
18:12 kkeithley https://images-na.ssl-images-amazon.com/images/I/31lCDRucbgL._AC_UL130_.jpg
18:14 haomaiwang joined #gluster
18:22 ankitraj joined #gluster
18:25 DV__ joined #gluster
18:32 farhorizon joined #gluster
18:37 arc0 joined #gluster
18:38 lalatend1M joined #gluster
18:40 shruti joined #gluster
18:40 sac` joined #gluster
18:41 pkalever_ joined #gluster
18:44 lalatenduM joined #gluster
18:44 rjoseph joined #gluster
18:45 rastar joined #gluster
18:46 Karan joined #gluster
18:48 shruti joined #gluster
18:48 pkalever joined #gluster
18:49 lalatend1M joined #gluster
18:49 hchiramm_ joined #gluster
18:49 rjoseph_ joined #gluster
18:49 sac joined #gluster
19:14 haomaiwang joined #gluster
19:19 yalu joined #gluster
19:19 cliluw joined #gluster
19:25 DV__ joined #gluster
19:25 msvbhat joined #gluster
19:38 Micha2k joined #gluster
19:40 Micha2k My bricks are losing the connection to each other. The network is not the problem, an older version is running fine.
19:40 Micha2k Client log: http://paste.ubuntu.com/23569065/  Brick log: http://paste.ubuntu.com/23569067/
19:40 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
19:40 Micha2k Not-working versions: 3.8.6, 3.7.17; last working version: 3.4.2
19:41 Micha2k Each server has two bricks but only one shows "Broken pipe" errors, that's why I'm ruling out network errors
20:14 haomaiwang joined #gluster
20:18 nisroc joined #gluster
20:22 f0rpaxe joined #gluster
20:34 johnmilton joined #gluster
20:39 edong23 joined #gluster
21:04 shruti joined #gluster
21:05 lalatenduM joined #gluster
21:05 pkalever joined #gluster
21:05 rjoseph joined #gluster
21:09 shruti` joined #gluster
21:09 pkalever joined #gluster
21:09 sac joined #gluster
21:10 lalatenduM joined #gluster
21:11 rjoseph joined #gluster
21:13 shruti joined #gluster
21:14 haomaiwang joined #gluster
21:18 panina joined #gluster
21:41 raghu` joined #gluster
21:51 shyam joined #gluster
22:00 Wizek joined #gluster
22:01 dnorman joined #gluster
22:14 haomaiwang joined #gluster
23:12 JoeJulian Micha2k: The port range changed at some point, I don't recall if it was after 3.4.
23:12 JoeJulian @ports
23:12 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up. All ports must be reachable by both servers and clients. Additionally it will listen on 38465-38468/tcp for NFS. NFS also depends on rpcbind/portmap ports 111 and 2049.
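
Translated into firewall rules, the factoid above amounts to something like the following on each server (firewalld syntax shown as one possibility; widen the brick port range to cover however many bricks you run):

    firewall-cmd --permanent --add-port=24007-24008/tcp            # glusterd management (+ rdma)
    firewall-cmd --permanent --add-port=49152-49251/tcp            # brick ports (glusterfsd), 49152 and up
    firewall-cmd --permanent --add-port=38465-38468/tcp            # gluster NFS
    firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp --add-port=2049/tcp   # rpcbind + NFS
    firewall-cmd --reload
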
23:13 Micha2k JoeJulian: but why does it work... and then not?
23:13 Micha2k the bricks are reconnecting
23:13 Micha2k disconnects happen every hour or so
23:13 JoeJulian Ah, that does sound odd.
23:13 Micha2k yes
23:13 JoeJulian Anything happen on a cron that could cause it?
23:13 Micha2k no...
23:13 Micha2k i changed NOTHING except glusterfs version
23:14 Micha2k I even re-created the volumes now...
23:14 haomaiwang joined #gluster
23:14 JoeJulian What's the client show?
23:16 Micha2k what do you mean?
23:17 Micha2k this is the volume log... http://paste.ubuntu.com/23569065/
23:17 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
23:18 Micha2k [2016-12-02 18:38:22.512262] C [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-gv0-client-10: server X.X.X.58:49153 has not responded in the last 42 seconds, disconnecting.
23:18 Micha2k this happens about every hour, for different clients
23:18 JoeJulian Well, if it's not network then it's server. Check the brick logs?
23:19 Micha2k [2016-12-02 18:38:53.703301] W [socket.c:596:__socket_rwv] 0-tcp.gv0-server: writev on X.X.X.219:49121 failed (Broken pipe)
23:19 Micha2k [2016-12-02 18:38:53.703381] W [socket.c:596:__socket_rwv] 0-tcp.gv0-server: writev on X.X.X.62:49118 failed (Broken pipe)
23:19 Micha2k [2016-12-02 18:38:53.703380] W [socket.c:596:__socket_rwv] 0-tcp.gv0-server: writev on X.X.X.107:49121 failed (Broken pipe)
23:19 Micha2k [2016-12-02 18:38:53.703424] W [socket.c:596:__socket_rwv] 0-tcp.gv0-server: writev on X.X.X.206:49120 failed (Broken pipe)
23:19 Micha2k [2016-12-02 18:38:53.703359] W [socket.c:596:__socket_rwv] 0-tcp.gv0-server: writev on X.X.X.58:49121 failed (Broken pipe)
23:19 Micha2k but! every server has two bricks
23:19 Micha2k and only one brick shows those "broken pipe" errors
23:19 Micha2k so it's not that the server loses the network connection completely
23:20 Micha2k just one of the two bricks is somehow disconnecting
23:20 skylar joined #gluster
23:21 Micha2k then there is the etc-glusterfs-glusterd.vol.log logfile
23:21 JoeJulian SIGPIPE is caused when writing to a tcp connection that's closed.
23:21 Micha2k just full of: [2016-12-02 18:38:01.461640] E [rpcsvc.c:565:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully
23:21 Micha2k [2016-12-02 18:38:12.633895] W [rpcsvc.c:270:rpcsvc_program_actor] 0-rpc-service: RPC program not available (req 1298437 330) for 134.176.18.206:49143
23:22 Micha2k but WHY is it disconnecting? :/
23:22 JoeJulian That looks like a version mismatch.
23:23 Micha2k and that's why it's disconnecting?
23:23 JoeJulian Shouldn't. That's only a warning.
23:23 JoeJulian maybe.
23:24 Micha2k I have the same version installed on all nodes
23:24 Micha2k currently 3.7.17
23:27 Micha2k The logs don't give any clue WHY the disconnects are happening....
23:27 Micha2k I don't know how to track down the problem
23:27 Micha2k Except for downgrading to 3.4.2, the last version which was working fine...
23:27 JoeJulian Ok, that error makes absolutely no sense. 1298437 is GLUSTER_FOP_PROGRAM which has been in since 3.1
23:31 Micha2k Is it possible that there are residue files from the old version?
23:31 Micha2k But even then, why should that cause disconnects?!
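
For anyone hitting the same symptom: the 42 seconds in the client log is the default network.ping-timeout, and a reasonable first check is whether the brick ports are actually online and reachable. Something along these lines, with the volume name taken from the logs above:

    # confirm which ports each brick listens on and whether glusterd sees them as online
    gluster volume status gv0
    gluster peer status

    # the 42s in "has not responded in the last 42 seconds" is this option's default
    # ('volume get' is only available in newer releases; otherwise check 'gluster volume info')
    gluster volume get gv0 network.ping-timeout
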
23:55 dnorman joined #gluster
23:58 Acinonyx joined #gluster
