
IRC log for #gluster, 2015-10-14


All times shown according to UTC.

Time Nick Message
00:01 kovshenin joined #gluster
00:15 plarsen joined #gluster
00:17 kovshenin joined #gluster
00:43 Pupeno_ joined #gluster
00:48 nangthang joined #gluster
01:00 badone_ joined #gluster
01:01 harish_ joined #gluster
01:05 EinstCrazy joined #gluster
01:06 RedW joined #gluster
01:16 harish joined #gluster
01:19 julim joined #gluster
01:19 haomaiwang joined #gluster
01:22 suliba joined #gluster
01:28 zhangjn joined #gluster
01:30 Lee1092 joined #gluster
01:31 vimal joined #gluster
01:41 atalur joined #gluster
01:47 Sadama joined #gluster
01:55 shyam joined #gluster
01:58 atalur joined #gluster
02:00 haomaiwa_ joined #gluster
02:01 haomaiwa_ joined #gluster
02:24 nangthang joined #gluster
02:39 theron joined #gluster
02:40 hagarth joined #gluster
02:40 bharata-rao joined #gluster
02:55 theron joined #gluster
03:01 17SADS61F joined #gluster
03:13 gildub joined #gluster
03:16 haomaiwa_ joined #gluster
03:17 nishanth joined #gluster
03:30 [7] joined #gluster
03:33 maveric_amitc_ joined #gluster
03:39 stickyboy joined #gluster
03:43 nzero joined #gluster
03:47 ramteid joined #gluster
03:48 haomaiwa_ joined #gluster
03:51 atinmu joined #gluster
03:54 itisravi joined #gluster
04:01 haomaiwa_ joined #gluster
04:01 sakshi joined #gluster
04:01 vimal joined #gluster
04:10 shubhendu joined #gluster
04:11 kanagaraj joined #gluster
04:15 vimal joined #gluster
04:17 gem joined #gluster
04:17 ppai joined #gluster
04:21 neha_ joined #gluster
04:27 beeradb_ joined #gluster
04:33 jiffin joined #gluster
04:33 RameshN joined #gluster
04:33 pppp joined #gluster
04:34 yazhini joined #gluster
04:39 ashiq joined #gluster
04:46 suliba joined #gluster
04:46 nbalacha joined #gluster
04:50 itisravi_ joined #gluster
04:52 RameshN joined #gluster
04:56 _ndevos joined #gluster
05:01 haomaiwa_ joined #gluster
05:06 ndarshan joined #gluster
05:08 Manikandan joined #gluster
05:11 poornimag joined #gluster
05:16 Bhaskarakiran joined #gluster
05:22 ramky joined #gluster
05:24 Akee1 joined #gluster
05:24 skoduri joined #gluster
05:26 kshlm joined #gluster
05:28 atalur joined #gluster
05:28 Akee1 joined #gluster
05:33 Bhaskarakiran joined #gluster
05:40 haomaiwa_ joined #gluster
05:40 mhulsman joined #gluster
05:44 plarsen joined #gluster
05:44 kotreshhr joined #gluster
05:46 neha_ joined #gluster
05:46 kdhananjay joined #gluster
05:50 hgowtham joined #gluster
05:52 haomaiwang joined #gluster
05:53 jwd joined #gluster
05:56 jwaibel joined #gluster
05:59 Chr1st1an_ joined #gluster
06:02 haomaiwang joined #gluster
06:07 mhulsman1 joined #gluster
06:09 kshlm joined #gluster
06:15 kovshenin joined #gluster
06:24 Chr1st1an joined #gluster
06:25 mhulsman joined #gluster
06:27 jiffin1 joined #gluster
06:28 mhulsman1 joined #gluster
06:29 mhulsman joined #gluster
06:31 jtux joined #gluster
06:37 nangthang joined #gluster
06:37 raghu joined #gluster
06:40 skoduri joined #gluster
06:48 vmallika joined #gluster
06:51 poornimag joined #gluster
06:53 suliba joined #gluster
06:55 ramteid joined #gluster
06:57 ju5t joined #gluster
06:59 Chr1st1an joined #gluster
07:02 haomaiwa_ joined #gluster
07:05 maveric_amitc_ joined #gluster
07:08 Philambdo joined #gluster
07:11 deniszh joined #gluster
07:13 [Enrico] joined #gluster
07:18 LebedevRI joined #gluster
07:18 Chr1st1an joined #gluster
07:19 ivan_rossi joined #gluster
07:36 ghenry joined #gluster
07:37 jiffin1 joined #gluster
07:50 Pupeno joined #gluster
07:54 Pupeno joined #gluster
08:00 poornimag joined #gluster
08:01 ju5t joined #gluster
08:02 rafi joined #gluster
08:02 haomaiwang joined #gluster
08:04 skoduri joined #gluster
08:11 muneerse joined #gluster
08:16 muneerse joined #gluster
08:21 Norky joined #gluster
08:30 zhangjn joined #gluster
08:32 armyriad joined #gluster
08:34 Slashman joined #gluster
08:45 RayTrace_ joined #gluster
08:47 Philambdo joined #gluster
08:52 Manikandan joined #gluster
08:54 stickyboy joined #gluster
08:56 ppai joined #gluster
08:56 zhangjn joined #gluster
08:58 deepakcs joined #gluster
09:04 maveric_amitc_ joined #gluster
09:06 ashiq joined #gluster
09:06 TvL2386 joined #gluster
09:12 Philambdo joined #gluster
09:14 jiffin1 joined #gluster
09:18 muneerse joined #gluster
09:22 ashiq joined #gluster
09:26 ashiq joined #gluster
09:26 poornimag joined #gluster
09:28 jamesc joined #gluster
09:30 skoduri joined #gluster
09:30 haomaiwa_ joined #gluster
09:30 RayTrace_ joined #gluster
09:32 vikki joined #gluster
09:38 stickyboy joined #gluster
09:39 ju5t joined #gluster
09:43 maveric_amitc_ joined #gluster
09:43 RayTrace_ joined #gluster
09:47 ibotty left #gluster
09:51 haomaiwang joined #gluster
10:01 haomaiwa_ joined #gluster
10:05 mario7 joined #gluster
10:05 mario7 hello
10:05 glusterbot mario7: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:05 Chr1st1an Anyone know if there is a plan to support subnet masks in the auth.allow option?
10:16 mario7 scenario: one server, two JBODs, 160x SATA 8TB HDDs total. Q: In terms of performance, which is better: a couple of RAID 5 groups (RAID controller cards involved) with glusterfs on top, OR plain glusterfs without any RAID group?
10:22 hagarth joined #gluster
10:22 raghu joined #gluster
10:26 wolsen joined #gluster
10:26 Manikandan joined #gluster
10:32 mhulsman joined #gluster
10:41 nbalacha joined #gluster
10:46 RameshN joined #gluster
10:52 arcolife joined #gluster
10:57 ppai joined #gluster
10:59 stickyboy joined #gluster
11:00 jiffin1 joined #gluster
11:04 harish_ joined #gluster
11:07 kdhananjay joined #gluster
11:13 bluenemo joined #gluster
11:15 skoduri joined #gluster
11:16 poornimag joined #gluster
11:21 Chr1st1an Depends, I guess, on what kind of data you will put on it; if you have large files that are going to be written, then I would think RAID5 or RAID6 would give better performance than a single drive
11:21 Chr1st1an But I would not run RAID5 on 8TB drives due to rebuild time
11:26 kdhananjay1 joined #gluster
11:27 ira joined #gluster
11:30 tty00 joined #gluster
11:31 tty00 hi all! is it possible to convert a glusterfs setup from 1x3 bricks to 2x2 (and of course add a new server) online and without any glitches?
11:37 EinstCrazy joined #gluster
11:37 zhangjn joined #gluster
11:39 plarsen joined #gluster
11:39 nishanth joined #gluster
11:40 shubhendu joined #gluster
11:40 deni Hi all...I'm getting this error LibgfapiException: glfs_init(117760320) failed: Transport endpoint is not connected
11:40 deni when I shut down the gluster master node (2 other replica nodes are still up)
11:41 deni I'm testing gluster HA to see if it works correctly
11:41 deni if I shut down the 2 other replica nodes they don't seem to affect this
11:41 deni but as I said if I shut down the master it doesn't work
11:42 deni this is what gluster volume info tells me: http://dpaste.com/2TKZZDK
11:42 glusterbot Title: dpaste: 2TKZZDK (at dpaste.com)
11:45 raghu joined #gluster
11:45 mario7 Thanks Chr1st1an.
11:45 hgowtham joined #gluster
11:47 anoopcs deni, I think you are referring to multiple volfile_server support in libgfapi. Am I right?
11:47 ndarshan joined #gluster
11:47 TvL2386 joined #gluster
11:48 deni anoopcs: I'm using gluster --mode=script volume create vol1 replica 3 ..
11:48 deni and I'm using the normal mount function in the api code
11:48 deni (python wrapper btw)
11:49 deni I was told that after the initial mount the API gets info about the replicas and can connect to them when the master is down
11:49 anoopcs deni,
11:50 anoopcs deni, You mean the server was down when glfs_init was invoked?
11:51 vikki joined #gluster
11:51 deni anoopcs: yeah I intentionally brought one of the nodes down
11:51 kotreshhr left #gluster
11:51 ppai joined #gluster
11:53 deni anoopcs: this is how the relevant part of the code looks like: http://dpaste.com/1BTKM6B
11:53 glusterbot Title: dpaste: 1BTKM6B (at dpaste.com)
11:56 anoopcs deni, Then I think that's expected. Because when glfs_init is being called, the server that you have specified before must be up and running in order to fetch the volfile.
11:56 maveric_amitc_ joined #gluster
11:57 deni anoopcs: this seems to be the relevant stack trace: http://dpaste.com/10QXR5M
11:57 glusterbot Title: dpaste: 10QXR5M (at dpaste.com)
11:57 deni got disconnected there for a bit...
11:57 deni irc I mean
11:58 hagarth joined #gluster
11:58 anoopcs deni, Then I think that's expected. Because when glfs_init is being called, the server that you have specified before must be up and running in order to fetch the volfile.
11:59 anoopcs deni, Or else you need to specify multiple volfile_servers before invoking glfs_init. This support is not yet in libgfapi.
12:00 anoopcs deni, The corresponding changes have already been proposed.
12:00 lpabon joined #gluster
12:00 raghu joined #gluster
12:04 RayTrace_ joined #gluster
12:05 jdarcy joined #gluster
12:08 maveric_amitc_ joined #gluster
12:09 Manikandan joined #gluster
12:10 deni anoopcs: i see. tnx for the help and info.
12:10 hagarth anoopcs: maybe we should open an RFE for that if we don't have one
12:12 anoopcs rastar, ^^
12:12 anoopcs rastar, Do we have an RFE for support of multiple volfile_servers in libgfapi?
12:15 rastar anoopcs: it was more of a bug
12:15 rastar the RFE was closed when Harsha fixed it in some version of 3.5, I guess
12:16 anoopcs rastar, Oh.. Ok.
12:16 skoduri rastar, but current 'glfs_init' doesn't take multiple servers right?
12:16 rastar skoduri: it does
12:16 unclemarc joined #gluster
12:16 rastar you have to call glfs_set_volfile_server multiple times before you call glfs_init
12:17 rastar glfs_init does not even take one volfile server
12:17 rastar it just takes fs
12:18 rastar and glfs_new takes volname
12:18 TvL2386 joined #gluster
12:18 skoduri sorry I meant that API 'glfs_set_volfile_server'.. it can't take multiple hosts at present, right
12:18 skoduri *list of servers
12:19 skoduri oh I see, it doesn't take the list as input
12:19 skoduri but needs to be invoked multiple times
12:19 skoduri each time with different server details?
12:21 raghu` joined #gluster
12:24 rastar skoduri: yes
12:24 rastar glfs_init iterates over them until it gets volfile
12:26 skoduri rastar, thanks.. though having a single API taking the list of servers seems the right way to go about it
12:26 anoopcs rastar, At present this iteration is not done, right?
12:27 rastar skoduri anoopcs hagarth deni : the support was added by http://review.gluster.org/7317
12:27 glusterbot Title: Gerrit Code Review (at review.gluster.org)
12:27 rastar and then some other change broke it
12:28 rjoseph joined #gluster
12:28 rastar it has been fixed in master by http://review.gluster.org/#/c/12114/
12:28 glusterbot Title: Gerrit Code Review (at review.gluster.org)
12:29 deni rastar: is that relesaed?
12:29 deni *released
12:29 rastar deni: no, it will get released in the next 3.y.z releases. I have not investigated backports yet.
12:29 spcmastertim joined #gluster
12:29 rastar it will come in the next releases of 3.5, 3.6 and 3.7
12:29 shubhendu joined #gluster
12:30 rastar which version are you using?
12:31 shyam joined #gluster
12:31 nishanth joined #gluster
12:31 deni rastar: glusterfs-common=3.6.4-1
12:31 deni (debian)
12:31 ndarshan joined #gluster
12:31 deni but I have to wait for the python wrapper to get that support as well....or implement it myself
12:32 rastar deni: yes, the latest in 3.6 is 3.6.6
12:32 Manikandan joined #gluster
12:32 deni we had memory leaks and other issues with some version so I think we pinned this one intentionally
12:33 deni but I'm not entirely sure anymore
12:33 rastar deni: I have not looked at python wrappers for this function.
12:33 rastar the latest fix is a bug fix and if python wrappers exist for glfs_set_volfile_server already then no changes are required there
12:35 deni rastar: https://github.com/gluster/libgfapi-python/blob/master/gluster/api.py#L285
12:35 glusterbot Title: libgfapi-python/api.py at master · gluster/libgfapi-python · GitHub (at github.com)
12:35 deni it appears it's in master
12:35 rastar deni: then you just have to wait for this bug fix
12:36 deni rastar: cool. tnx
12:37 rastar deni: I have an easy workaround for you
12:37 rastar you don't need to wait for the fix
12:37 rastar have you tried multiple invocations of glfs_set_volfile_server?
12:38 anoopcs hagarth, my mistake. As rastar said above, the support was added by http://review.gluster.org/7317, some other changes broke it, and it got fixed by http://review.gluster.org/#/c/12114/.
12:38 glusterbot Title: Gerrit Code Review (at review.gluster.org)
12:39 rastar workaround: 1. fs = glfs_new, 2. glfs_set_volfile_server(fs, "tcp", server1.com, 24007) 3. glfs_set_volfile_server(fs, "tcp", server2.com, 24007) 4. glfs_set_volfile_server(fs, "tcp", server3.com, 24007) 5. glfs_init
12:39 theron joined #gluster
12:40 rastar glfs_init should succeed even if server1.com and server2.com are down
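[Editor's note: a minimal C sketch of the sequence rastar describes above, assuming the glusterfs-api development headers are installed (build with "gcc demo.c -lgfapi"). The volume name "vol1" and the server hostnames are placeholders.]

    /* Register several volfile servers before glfs_init() so the mount
     * survives the first server being down. glfs_init() tries each
     * registered server in turn until it can fetch the volfile. */
    #include <stdio.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        glfs_t *fs = glfs_new("vol1");                 /* volume name */
        if (!fs)
            return 1;

        glfs_set_volfile_server(fs, "tcp", "server1.example.com", 24007);
        glfs_set_volfile_server(fs, "tcp", "server2.example.com", 24007);
        glfs_set_volfile_server(fs, "tcp", "server3.example.com", 24007);

        if (glfs_init(fs) != 0) {                      /* fails only if no server is reachable */
            perror("glfs_init");
            glfs_fini(fs);
            return 1;
        }

        /* ... use the volume through other glfs_* calls ... */
        glfs_fini(fs);
        return 0;
    }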
12:40 hagarth anoopcs: got it, thanks!
12:40 anoopcs hagarth, np.
12:47 atalur joined #gluster
12:50 mpietersen joined #gluster
12:58 deni rastar: I haven't....I would have to change the python wrapper though
12:58 B21956 joined #gluster
12:58 tru_tru joined #gluster
12:58 rastar deni: why would you need the change?
12:58 deni because that's where the glfs_set_volfile_server is invoked
12:59 rastar you mean your python application?
12:59 deni rastar: no, by the library we're using: https://github.com/gluster/libgfapi-python/blob/master/gluster/gfapi.py#L439
12:59 glusterbot Title: libgfapi-python/gfapi.py at master · gluster/libgfapi-python · GitHub (at github.com)
13:01 rastar deni, Oh got it
13:01 rastar I did not know that the mount call in the wrapper wraps fs creation, set_volfile_server and init
13:01 rastar you are right, you will need the change
13:02 rastar deni: you will need the change even after the bug fix
13:03 mhulsman joined #gluster
13:03 rastar deni: I need to leave now. ping me if you have more questions
13:05 mhulsman1 joined #gluster
13:09 jiffin1 joined #gluster
13:11 shaunm joined #gluster
13:14 arcolife joined #gluster
13:16 deni rastar_afk: thank you for your help. I have all the pieces together now to figure out how to mitigate this before doing the upgrade and the proper fix later.
13:25 skoduri joined #gluster
13:29 maserati joined #gluster
13:34 ju5t joined #gluster
13:39 muneerse joined #gluster
13:42 bennyturns joined #gluster
13:45 skylar joined #gluster
13:46 dgandhi joined #gluster
13:46 hamiller joined #gluster
13:49 ndarshan joined #gluster
13:51 nbalacha joined #gluster
13:52 jamesc joined #gluster
14:11 klaxa|work joined #gluster
14:13 theron joined #gluster
14:22 shyam joined #gluster
14:29 David_Varghese joined #gluster
14:29 thoht joined #gluster
14:29 thoht hi
14:29 glusterbot thoht: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:29 thoht sometimes i can see gluster file with ?????????? ? ?    ?             ?            ? vm-100-disk-2.qcow2
14:29 thoht I'm accessing it through a mount point of type glusterfs
14:30 thoht on gluster brick2 it looks normal
14:33 thoht not split brain
14:34 haomaiwang joined #gluster
14:36 atalur joined #gluster
14:41 hurdman_begins joined #gluster
14:42 hurdman_begins hi, I'm trying to use aufs inside a glusterfs mount
14:42 hurdman_begins it fails with : http://pastebin.ca/3196541
14:42 glusterbot Title: pastebin - Miscellany - post number 3196541 (at pastebin.ca)
14:42 hurdman_begins any idea ?
14:42 hurdman_begins aufs works well inside my other directories that are not glusterfs
14:47 kkeithley I don't see any indication that aufs supports extended attributes. Does it? Gluster needs extended attributes, and is fairly well documented as such.
14:49 hurdman_begins kkeithley: I mount aufs on top of gluster, not gluster on top of aufs
14:49 spcmastertim joined #gluster
14:50 ayma joined #gluster
14:52 hurdman_begins I have found it, it's a fuse issue
14:53 hurdman_begins so, gluster is not the right technology for me
14:53 hurdman_begins regards
14:53 hurdman_begins left #gluster
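[Editor's note: a small, hypothetical C probe illustrating kkeithley's point above that a filesystem backing Gluster must support extended attributes. Gluster itself uses the trusted.* namespace (root only); setting a user.* attribute is enough to confirm xattr support. This is not from the Gluster docs; the attribute name is made up.]

    #include <stdio.h>
    #include <string.h>
    #include <sys/xattr.h>

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : ".";   /* e.g. a brick directory */
        const char *val = "1";

        if (setxattr(path, "user.xattr-probe", val, strlen(val), 0) == -1) {
            perror("setxattr");                        /* ENOTSUP => no xattr support here */
            return 1;
        }
        printf("%s: extended attributes supported\n", path);
        removexattr(path, "user.xattr-probe");         /* clean up the probe attribute */
        return 0;
    }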
14:53 lbarfield Does anyone know where I can look to see which files were "skipped" in a geo-rep link?
14:53 nishanth joined #gluster
14:53 lbarfield I've got 170165 "FILES SKIPPED" in the geo-rep status right now.  I'd like to know what those are so I can fix it.
15:01 haomaiwa_ joined #gluster
15:01 nbalacha joined #gluster
15:03 kovshenin joined #gluster
15:07 ju5t joined #gluster
15:12 kovsheni_ joined #gluster
15:13 kovsheni_ joined #gluster
15:16 RayTrace_ joined #gluster
15:18 kovshenin joined #gluster
15:19 asdf3 left #gluster
15:25 JoeJulian lbarfield: Have you looked in the logs? You should at least have the warning, "changelogs %s could not be processed - moving on..." (according to the source)
15:28 lbarfield JoeJulian: That should be in the geo-rep master log for the volume in question, correct?
15:29 JoeJulian Seems like the most likely place.
15:29 JoeJulian fyi, they are marked to be retried at least.
15:30 gem joined #gluster
15:32 hagarth lbarfield: are you running 3.7.x?
15:32 JoeJulian No, that output is only in 3.6.
15:32 lbarfield hagarth: No, latest 3.6.5
15:32 skoduri joined #gluster
15:33 lbarfield I know I need to upgrade to 3.7, but I don't know that I can do that without taking the volumes / NFS mounts offline.
15:34 lbarfield And I'd like to make sure things are all in sync before making more changes.
15:34 lbarfield JoeJulian: I do see a massive list of changelogs that "could not be processed"
15:34 hagarth lbarfield: makes sense. if you need more assistance in resolving this, please drop a note on gluster-users and one of the developers will be able to offer some assistance.
15:34 lbarfield I guess I'll need to go through them all individually to figure out which files are missing.
15:35 lbarfield hagarth: Already dropped a message there as well.
15:36 hagarth lbarfield: what's the subject line of that message?
15:36 lbarfield Just trying to get this back on track ASAP.  Spent most of the day yesterday just getting it to go back to a non-faulty state.  Lots of "Device Busy" errors
15:36 lbarfield hagarth: Geo-Replication "FILES SKIPPED"
15:36 lbarfield I'm actually still getting Device Busy errors on another slave.
15:37 lbarfield I've found two or three mailing list posts with that problem, that were never responded to.
15:37 lbarfield So I'm assuming no one knows how to fix it.
15:37 hagarth lbarfield: unable to locate the post. when was this sent?
15:37 lbarfield hagarth: 30 minutes ago, probably still awaiting moderator approval
15:38 lbarfield I just asked the same question I did here, where to find the list of "skipped" files.
15:38 lbarfield Once I get that resolved I'll move on to fixing the other faulty link with "Device Busy"
15:40 stickyboy joined #gluster
15:40 semiautomatic joined #gluster
15:40 lbarfield If anyone knows a good consultant that can fix this stuff let me know.
15:41 semiautomatic joined #gluster
15:41 lbarfield I'd rather pay someone to get it working again than bang my head against it for the rest of the week.
15:41 hagarth lbarfield: just approved your message.
15:41 lbarfield hagarth: thanks
15:56 shyam joined #gluster
16:01 haomaiwa_ joined #gluster
16:02 ju5t joined #gluster
16:03 cholcombe joined #gluster
16:06 ira joined #gluster
16:15 RayTrace_ joined #gluster
16:23 jiffin joined #gluster
16:25 lbarfield Yeah, so can anyone recommend a good U.S. based consulting firm for Gluster?
16:25 calavera joined #gluster
16:30 skoduri joined #gluster
16:32 gem joined #gluster
16:39 Champi_ joined #gluster
16:39 morse_ joined #gluster
16:39 jotun_ joined #gluster
16:41 jiffin joined #gluster
16:41 monotek1 joined #gluster
16:43 kotreshhr joined #gluster
16:44 johndescs_ joined #gluster
16:44 atrius_ joined #gluster
16:44 marlinc_ joined #gluster
16:45 stickyboy_ joined #gluster
16:46 sloop joined #gluster
16:47 RayTrace_ joined #gluster
16:48 ivan_rossi left #gluster
16:50 stickyboy joined #gluster
16:50 arcolife joined #gluster
16:57 rafi joined #gluster
16:58 jiffin joined #gluster
16:59 shortdudey123 joined #gluster
16:59 deni_ joined #gluster
16:59 harish_ joined #gluster
16:59 rp_ joined #gluster
16:59 Rydekull joined #gluster
16:59 Telsin joined #gluster
17:00 Rapture joined #gluster
17:01 haomaiwa_ joined #gluster
17:07 klaas joined #gluster
17:07 telmich joined #gluster
17:07 telmich joined #gluster
17:07 JamesG joined #gluster
17:07 JamesG joined #gluster
17:07 malevolent joined #gluster
17:07 vincent_vdk joined #gluster
17:07 rmgroth joined #gluster
17:07 nhayashi joined #gluster
17:07 dlambrig_ joined #gluster
17:07 skoduri joined #gluster
17:07 mbukatov joined #gluster
17:07 shortdudey123 joined #gluster
17:07 kenansulayman joined #gluster
17:08 _liquid_ joined #gluster
17:08 Gugge joined #gluster
17:08 lkoranda joined #gluster
17:08 Guest96084 joined #gluster
17:08 rehunted joined #gluster
17:10 dgandhi joined #gluster
17:10 dastar joined #gluster
17:10 deni_ joined #gluster
17:10 harish_ joined #gluster
17:10 rp_ joined #gluster
17:10 Rydekull joined #gluster
17:10 Telsin joined #gluster
17:13 nokiomanz joined #gluster
17:14 rafi1 joined #gluster
17:15 jiffin joined #gluster
17:18 nokiomanz Hi all, if someone can answer or point me to the right documentation that would be awesome! I am testing glusterfs. I created a 2-node setup in replicate mode. When I mount the brick on the client and access it, it works just fine. I create stuff and it appears on both nodes. When I shut down or reboot 1 node, the client hangs under the mount point until the node is back online. This does not happen if it is a 3-node setup.
17:23 JoeJulian Sounds like quorum.
17:23 JoeJulian Did you check your logs?
17:25 nokiomanz would I get more info on the client or server side?
17:25 nokiomanz I will go re run my test and see how it goes
17:25 nokiomanz is there a doc that states how quorum works by default?
17:28 bennyturns joined #gluster
17:28 rafi joined #gluster
17:28 rafi joined #gluster
17:29 skylar1 joined #gluster
17:31 JoeJulian nokiomanz: It didn't use to be enabled by default, but I understand some aspects of it are now. It's somewhat confusing and undocumented at this point.
17:31 JoeJulian I would probably look in the client first.
17:34 rafi1 joined #gluster
17:51 nokiomanz JoeJulian, ok i will go take a look into that thanks !
17:51 nokiomanz brb
17:53 rafi joined #gluster
17:54 neofob joined #gluster
18:01 haomaiwa_ joined #gluster
18:02 Philambdo joined #gluster
18:12 kotreshhr left #gluster
18:12 deniszh joined #gluster
18:15 RayTrace_ joined #gluster
18:16 skylar joined #gluster
18:20 haomaiwa_ joined #gluster
18:40 nokiomanz JoeJulian, Using a 3-node setup, if I stop both glusterd and glusterfsd on one node and consult the client log, I can see that quorum is met. If I do the same on a second node, the client log shows that quorum is not met and it turns the volume RO.
18:40 nokiomanz That part is fine
18:40 nokiomanz what confuses me
18:40 nokiomanz is when all 3 nodes are up and I just reboot the server.
18:40 nokiomanz then the client hangs
19:16 dlambrig_ joined #gluster
19:17 Philambdo joined #gluster
19:26 semiosis nokiomanz: ,,(ping timeout)
19:26 glusterbot nokiomanz: I do not know about 'ping timeout', but I do know about these similar topics: 'ping-timeout'
19:26 semiosis nokiomanz: ,,(ping-timeout)
19:26 glusterbot nokiomanz: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
19:26 semiosis that may be why clients hang
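[Editor's note: a hedged example of what tuning that timeout looks like; network.ping-timeout is the volume option behind the 42-second default glusterbot describes, and "vol1" is a placeholder. Lowering it makes clients give up on a dead server sooner, at the cost of spurious reconnects when servers are merely busy.]

    # lower the client ping timeout on a test volume (default is 42 seconds)
    gluster volume set vol1 network.ping-timeout 10
    # the changed value shows up under "Options Reconfigured"
    gluster volume info vol1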
19:26 nokiomanz ok so the reboot is fast, which might be why the mount stops hanging as soon as the node is up and running again.
19:27 nokiomanz I was doing reboots to "simulate" a node dying and coming back later
19:27 semiosis check client logs for more info
19:29 nokiomanz mind if i link a pastebin of my client log?
19:33 nokiomanz http://pastebin.com/nC4krwut In case this talks more to you than it does to me for now :p
19:33 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
19:33 nokiomanz http://fpaste.org/279252/
19:33 glusterbot Title: #279252 Fedora Project Pastebin (at fpaste.org)
19:42 neofob joined #gluster
19:43 shaunm joined #gluster
19:50 dlambrig_ joined #gluster
19:52 rafi joined #gluster
20:05 Philambdo joined #gluster
20:18 calavera joined #gluster
20:58 theron joined #gluster
21:05 skylar joined #gluster
21:08 plarsen joined #gluster
21:11 DV joined #gluster
21:29 timotheus1 joined #gluster
21:39 stickyboy joined #gluster
21:42 B21956 joined #gluster
21:44 Chr1st1an Any big issues going from Glusterfs 3.4 to 3.7.1 in one jump, other than that it has to be done offline?
21:55 neofob joined #gluster
22:01 TheSeven joined #gluster
22:07 DV joined #gluster
22:20 wonko so, in reading about gluster it seems that small file performance is an issue. Everything I was reading was referring to 3.3/3.4 though. has 3.7 gotten any better about small files?
22:20 JoeJulian Chr1st1an: Edit /etc/glusterfs/glusterd.vol to contain this line: "option rpc-auth-allow-insecure on" and also run "gluster volume set <volname> server.allow-insecure on" for your volumes. Volumes will need to be stopped and started after that change.
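[Editor's note: JoeJulian's advice rendered as a sketch; <volname> is a placeholder, and placing the option inside the "volume management" block is an assumption based on the stock glusterd.vol layout.]

    # /etc/glusterfs/glusterd.vol on every server, inside the "volume management" block:
    option rpc-auth-allow-insecure on

    # then, for each volume (needs a stop/start to take effect):
    gluster volume set <volname> server.allow-insecure on
    gluster volume stop <volname>
    gluster volume start <volname>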
22:21 JoeJulian define "small file" and what your issue is with them.
22:23 Chr1st1an JoeJulian: Hmm
22:26 badone joined #gluster
22:26 Chr1st1an Is that in regard to using a 3.7 client against a 3.4 gluster volume?
22:29 wonko JoeJulian: 14GB of mostly 26k files
22:29 abyss^ joined #gluster
22:29 wonko JoeJulian: from everything I've read, performance is abysmal with lots of small files
22:30 wonko 650k files (just checked)
22:34 cholcombe joined #gluster
22:36 arcolife joined #gluster
22:36 JoeJulian Chr1st1an: I think it's just in general.
22:39 Chr1st1an Ok , will have to look into that thanks :)
22:39 JoeJulian wonko: 26k is 3 jumbo frames. Add another rtt in front of that for consistency checks. So there's going to be a noticeable difference in latency since you're effectively adding another 25% to it. Without eliminating consistency, I don't know how you would get around that.
22:40 JoeJulian @lucky CAP theorem
22:40 glusterbot JoeJulian: https://en.wikipedia.org/wiki/CAP_theorem
22:41 JoeJulian Application side solutions could include keeping files open or caching, still a potential consistency problem but one that's isolated instead of system-wide.
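[Editor's note: the arithmetic behind JoeJulian's estimate, roughly: with a 9000-byte jumbo MTU, a 26 KB file is ceil(26624 / 9000) = 3 frames of payload, so on the order of three frame-times per read; AFR's consistency check (a lookup plus xattr comparison on each replica) adds about one more round trip up front, and one extra round trip on top of roughly three is where the "another 25%" figure comes from.]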
22:58 theron joined #gluster
23:00 jobewan joined #gluster
23:13 gildub joined #gluster
23:15 JoeJulian To be clear and fair, though, wonko, I'm just a user. There are much smarter people than I working on performance improvements. Perhaps they have figured out a way to get around that which I haven't thought of. If so, it's certainly worth testing 3.7 to find out if it suits your use case.
23:17 zhangjn joined #gluster
23:31 haomaiwa_ joined #gluster
23:50 plarsen joined #gluster
