
IRC log for #gluster, 2016-07-20


All times shown according to UTC.

Time Nick Message
00:19 Alghost joined #gluster
00:27 hchiramm_ joined #gluster
00:27 pdrakeweb joined #gluster
00:29 farhorizon joined #gluster
01:19 shyam joined #gluster
01:32 shdeng joined #gluster
01:46 derjohn_mob joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:59 kukulogy joined #gluster
02:02 wadeholler joined #gluster
02:03 harish_ joined #gluster
02:08 nishanth joined #gluster
02:10 wadeholler joined #gluster
02:22 Lee1092 joined #gluster
02:49 kshlm joined #gluster
03:01 kukulogy joined #gluster
03:09 karnan joined #gluster
03:09 unforgiven512 joined #gluster
03:20 magrawal joined #gluster
03:21 muneerse joined #gluster
03:24 kukulogy Hi I'm trying to install Gluster Native Client. How do I enable Infiniband verbs? Its default value is no when I run ./configure.
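
The question above went unanswered in the channel. The build-time switch for the verbs/RDMA transport varies between GlusterFS releases, so treat the flag below as an assumption and confirm it against ./configure --help in your own source tree; a minimal sketch:

    # Find out how this source tree names the Infiniband/RDMA option;
    # the exact flag differs between GlusterFS releases.
    ./configure --help | grep -iE 'verbs|rdma'

    # Assuming the tree exposes an --enable-ibverbs switch (verify with the
    # command above), rebuild with the verbs transport turned on. The headers
    # come from libibverbs-devel / librdmacm-devel on RPM-based systems.
    ./configure --enable-ibverbs
    make
    sudo make install
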
03:34 atinm joined #gluster
03:38 mchangir joined #gluster
03:42 kramdoss_ joined #gluster
03:56 ppai joined #gluster
03:59 mchangir joined #gluster
04:00 hchiramm joined #gluster
04:03 msvbhat joined #gluster
04:04 itisravi joined #gluster
04:05 poornimag joined #gluster
04:05 karnan joined #gluster
04:11 rafi joined #gluster
04:18 nbalacha joined #gluster
04:29 aspandey joined #gluster
04:31 shubhendu joined #gluster
04:32 nehar joined #gluster
04:41 atalur joined #gluster
04:42 sanoj joined #gluster
04:45 karnan joined #gluster
04:48 sakshi joined #gluster
04:51 msvbhat joined #gluster
05:01 kdhananjay joined #gluster
05:06 jiffin joined #gluster
05:07 prasanth joined #gluster
05:09 hchiramm joined #gluster
05:21 satya4ever joined #gluster
05:28 Apeksha joined #gluster
05:28 Manikandan joined #gluster
05:31 mchangir joined #gluster
05:32 Bhaskarakiran joined #gluster
05:34 hgowtham joined #gluster
05:35 mhulsman joined #gluster
05:40 md2k joined #gluster
05:49 kotreshhr joined #gluster
05:55 ppai joined #gluster
05:55 hchiramm joined #gluster
05:57 msvbhat joined #gluster
06:00 sakshi joined #gluster
06:06 ramky joined #gluster
06:15 ju5t joined #gluster
06:17 atalur joined #gluster
06:18 anil_ joined #gluster
06:18 Bhaskarakiran_ joined #gluster
06:19 kaushal_ joined #gluster
06:24 loadtheacc joined #gluster
06:28 pur_ joined #gluster
06:29 loadtheacc left #gluster
06:29 karthik_ joined #gluster
06:34 loadtheacc joined #gluster
06:34 msvbhat joined #gluster
06:42 devyani7_ joined #gluster
06:43 jwd joined #gluster
06:45 rastar joined #gluster
06:50 Saravanakmr joined #gluster
06:54 ashiq joined #gluster
07:03 aspandey joined #gluster
07:04 opthomasprime joined #gluster
07:05 muneerse2 joined #gluster
07:11 derjohn_mob joined #gluster
07:11 mhulsman joined #gluster
07:14 kovshenin joined #gluster
07:15 kaushal_ joined #gluster
07:18 karnan joined #gluster
07:21 bwerthmann joined #gluster
07:24 muneerse joined #gluster
07:26 MikeLupe joined #gluster
07:29 fsimonce joined #gluster
07:29 muneerse joined #gluster
07:35 muneerse joined #gluster
07:36 hybrid512 joined #gluster
07:36 hybrid512 joined #gluster
07:38 muneerse2 joined #gluster
07:41 mhulsman joined #gluster
07:43 muneerse joined #gluster
07:44 muneerse2 joined #gluster
07:47 armyriad joined #gluster
07:49 bwerthmann joined #gluster
07:55 wnlx joined #gluster
07:56 derjohn_mob joined #gluster
08:04 hackman joined #gluster
08:11 mhulsman1 joined #gluster
08:13 Wizek joined #gluster
08:15 kaushal_ joined #gluster
08:18 Slashman joined #gluster
08:39 Seth_Karlo joined #gluster
08:43 shdeng joined #gluster
08:45 Jules-2 joined #gluster
08:45 Jules-2 can anybody explain me this new error (glusterfs 3.7.13): [2016-07-20 08:33:14.501923] I [dict.c:473:dict_get] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.7.13/xlator/debug/io-stats.so(io_stats_lookup_cbk+0x117) [0x7feb256aa017] -->/usr/lib/x86_64-linux-gnu/glusterfs/3.7.13/xlator/system/posix-acl.so(posix_acl_lookup_cbk+0x29e) [0x7feb25490afe] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get+0x93) [0x7feb2cb55373]
08:45 Jules-2 ) 0-dict: !this || key=system.posix_acl_default [Invalid argument]
08:45 glusterbot Jules-2: ('s karma is now -145
08:46 ivan_rossi joined #gluster
08:46 ivan_rossi left #gluster
08:49 ndevos Jules-2: is that an error (well, info message) that did not happen in 3.7.12, or what earlier version?
08:49 ndevos @ppa
08:49 glusterbot ndevos: The official glusterfs packages for Ubuntu are available here: 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://goo.gl/aAJEN5 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
08:49 jiffin ndevos: it seems dictionary passed to posix_acl_lookup is NULL
08:50 Jules-2 ndevos: that's brand new since this release, and I'm also having an issue with quorum after upgrading to it. Often the clients can't write to the NFS share because it's read-only, but all three nodes are up and running.
08:50 shdeng joined #gluster
08:51 Jules-2 which is very weird: [afr-transaction.c:789:afr_handle_quorum] 0-netshare-replicate-0: /www/XXXX/Cache/Code/fluid_template/Standalone_template_xxx.php: Failing UNLINK as quorum is not met
08:52 nehar_ joined #gluster
08:56 nishanth joined #gluster
08:56 ndevos Jules-2: I'm not sure about the quorum bits, maybe kdhananjay can poke itisravi and see if it is something familiar
08:57 ndevos jiffin: yeah, but was there a change between 3.7.12 and 3.7.13 in relation to POSIX ACLs?
08:58 MikeLupe2 joined #gluster
09:00 nehar joined #gluster
09:00 PaulCuzner joined #gluster
09:01 MikeLupe joined #gluster
09:11 morsik joined #gluster
09:11 morsik Hi, does GlusterFS support KSM?
09:12 Guest_83873 joined #gluster
09:13 Guest_83873 Allah is doing
09:13 Guest_83873 sun is not doing Allah is doing
09:13 Guest_83873 moon is not doing Allah is doing
09:13 Guest_83873 stars are not doing Allah is doing
09:13 Guest_83873 planets are not doing Allah is doing
09:13 Guest_83873 galaxies are not doing Allah is doing
09:14 morsik oh, get out.
09:14 Guest_83873 oceans are not doing Allah is doing
09:14 Guest_83873 mountains are not doing Allah is doing
09:14 morsik hagarth, JoeJulian, purpleidea, scubacuda
09:14 morsik hagarth, JoeJulian, purpleidea, semiosis: ↑ killhim.
09:14 Guest_83873 trees are not doing Allah is doing
09:15 Guest_83873 mom is not doing Allah is doing
09:15 Guest_83873 dad is not doing Allah is doing
09:15 Guest_83873 boss is not doing Allah is doing
09:15 morsik Allah is doing terrorists attacks.
09:16 Guest_83873 job is not doing Allah is doing
09:16 Guest_83873 dollar is not doing Allah is doing
09:16 Guest_83873 degree is not doing Allah is doing
09:16 Guest_83873 medicine is not doing Allah is doing
09:16 Jules-2 join #ircop
09:16 Guest_83873 customers are not doing Allah is doing
09:17 morsik oh god…
09:17 ndevos misc, nigelb, kshlm: maybe you have the powah? ^
09:17 Guest_83873 you can not get a job without the permission of Allah
09:17 mhulsman joined #gluster
09:18 Guest_83873 you can not get married without the permission of Allah
09:19 Guest_83873 nobody can get angry at you without the permission of Allah
09:19 kshlm ndevos, I don't have it. Could glusterbot do it?
09:19 Wizek joined #gluster
09:20 ndevos kshlm: yes, but only the glusterbot admins
09:20 Guest_83873 light is not doing Allah is doing
09:20 Guest_83873 fan is not doing Allah is doing
09:20 morsik there's should be voting possibility on glusterbot… :<
09:21 Guest_83873 businessess are not doing Allah is doing
09:21 Guest_83873 america is not doing Allah is doing
09:21 Guest_83873 fire can not burn without the permission of Allah
09:21 Guest_83873 knife can not cut without the permission of Allah
09:22 misc ndevos: nope, but justin could
09:22 misc or hagarth
09:22 morsik looks like they are not here :<
09:22 nigelb Let me find a freenode admin.
09:22 morsik wrote to 3 of them already…
09:23 morsik no response :D
09:23 Guest_83873 rulers are not doing Allah is doing
09:23 kshlm morsik, they're all in US timezones
09:23 Guest_83873 governments are not doing Allah is doing
09:23 morsik ah… damn.
09:23 misc so I guess we can just ignore and that's it
09:23 misc (if I could do the same from heat)
09:24 Guest_83873 sleep is not doing Allah is doing
09:24 Guest_83873 hungner is not doing Allah is doing
09:24 morsik yep… added to ignorelist.
09:24 morsik So, my problem again :D
09:24 morsik Hi, does GlusterFS support KSM?
09:24 Guest_83873 food does not take away the hunger Allah takes away the hunger
09:24 MikeLupe2 joined #gluster
09:25 muneerse joined #gluster
09:25 Guest_83873 water does not take away the thirst Allah takes away the thirst
09:25 Guest_83873 seeing is not doing Allah is doing
09:25 Guest_83873 hearing is not doing Allah is doing
09:25 Guest_83873 seasons are not doing Allah is doing
09:25 Guest_83873 weather is not doing Allah is doing
09:26 Guest_83873 humans are not doing Allah is doing
09:26 Guest_83873 animals are not doing Allah is doing
09:26 Guest_83873 the best amongst you are those who learn and teach quran
09:26 Guest_83873 one letter read from book of Allah amounts to one good deed and Allah multiplies one good deed ten times
09:27 ndevos morsik: KSM? the thing that KVM uses for reducing the number of copies of the same memory?
09:27 morsik ndevos: exactly. This is kernel feature, but opt-in.
09:27 ndevos morsik: I thought that was a Linux memory feature, not really tied to processes
09:27 morsik ndevos: yep, but it's opt-in by process.
09:28 ndevos morsik: ah... well, if it is opt-in, I'm sure Gluster does not do it
09:28 ndevos morsik: if it requires some fiddling in /proc/... , you may be able to set it up, but I doubt you get a real benefit
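
For context on the opt-in point: the kernel-side KSM controls live under /sys/kernel/mm/ksm/, and turning the scanner on system-wide only affects processes that have marked their own memory as mergeable via madvise(), which Gluster does not do, as ndevos notes. A minimal sketch of poking the scanner:

    # Start the KSM scanner (0 = off, 1 = run, 2 = unmerge everything and stop).
    echo 1 | sudo tee /sys/kernel/mm/ksm/run

    # These counters stay at zero unless some process has madvise()d its
    # pages as MADV_MERGEABLE, i.e. has opted in.
    cat /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing
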
09:28 nigelb There we go. Disconnected by services. One of the pings to freenode admins had effect.
09:29 ndevos indeed, thanks to whoever did that :)
09:29 atinm nigelb++
09:29 glusterbot atinm: nigelb's karma is now 1
09:29 morsik nigelb: yeah, some admin wrote to me about that.
09:30 nigelb morsik: thank you :)
09:30 jiffin ndevos: IMO there were no changes in the posix_acl xlator; maybe something between posix and posix_acl is the culprit
09:32 jiffin thank you morshik++ as well
09:32 glusterbot jiffin: morshik's karma is now 1
09:32 morsik jiffin: tabfali :>
09:32 jiffin sorry morsik++
09:32 morsik tabfail haha :D
09:32 glusterbot jiffin: morsik's karma is now 1
09:33 cloph woes with geo-replication once again... The link between the master and the copy is so slow, and the data is in the range of terabytes. Unfortunately it keeps changing (rotating backups) - the problem now is that it seems stuck, retrying "incomplete sync, retrying changelogs: XSYNC-CHANGELOG.<timestamp>" at a much slower rate than new logs are generated.
09:33 morsik nigelb: at least uWSGI docs says it's opt-in: http://uwsgi-docs.readthedocs.io/en/latest/KSM.html
09:33 glusterbot Title: Using Linux KSM in uWSGI uWSGI 2.0 documentation (at uwsgi-docs.readthedocs.io)
09:33 cloph Can I tell it to just ignore any failures while in the initial hybrid crawl? I mean like this it will never catch up...
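
For readers hitting the same geo-replication wall: the session state and its tunables can be inspected from the master with the stock CLI. The volume and slave names below are placeholders, and which tunables exist (sync jobs, tar-over-ssh, and so on) depends on the 3.7.x release, so check the dumped config list rather than taking option names on faith.

    # "mastervol" and "slavehost::slavevol" are placeholders for the local
    # volume and the remote end of the geo-replication session.
    gluster volume geo-replication mastervol slavehost::slavevol status detail

    # Dump the per-session configuration; individual options can then be
    # changed with "config <option> <value>".
    gluster volume geo-replication mastervol slavehost::slavevol config
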
09:34 soupnanodesukar joined #gluster
09:37 MikeLupe joined #gluster
09:39 jiffin1 joined #gluster
09:40 Wizek joined #gluster
09:48 rastar joined #gluster
09:50 mhulsman joined #gluster
09:54 Muthu_ joined #gluster
09:57 kaushal_ joined #gluster
10:01 arcolife joined #gluster
10:15 rastar joined #gluster
10:19 Alghost_ joined #gluster
10:19 itisravi joined #gluster
10:22 msvbhat joined #gluster
10:22 karthik_ joined #gluster
10:24 kshlm joined #gluster
10:26 PaulCuzner joined #gluster
10:26 MikeLupe joined #gluster
10:39 Bhaskarakiran_ joined #gluster
10:49 aspandey_ joined #gluster
10:51 robb_nl joined #gluster
10:54 msvbhat joined #gluster
11:13 kotreshhr joined #gluster
11:13 johnmilton joined #gluster
11:20 Bhaskarakiran joined #gluster
11:21 Bhaskarakiran joined #gluster
11:27 kshlm joined #gluster
11:28 jiffin joined #gluster
11:31 kshlm joined #gluster
11:32 aravindavk joined #gluster
11:43 kkeithley Gluster Community Meeting in ~15 minutes in #gluster-meeting
11:47 wadeholler joined #gluster
11:54 jith_ joined #gluster
11:55 Saravanakmr joined #gluster
11:56 jith_ hi all, I am trying to deploy glusterfs using gdeploy. I am trying with 2 * 2-create-volume-with-backend.conf.. In the remote gluster servers, logical volume is getting created. I could see it through lvsdisplay command.. but after LV creation, it is struck with the following error. “failed: [10.184.49.114] (item={u'lv': u'lv1', u'pool': u'pool1', u'vg': u'vg1'}) => {"failed": true, "item":...
11:56 jith_ ...{"lv": "lv1", "pool": "pool1", "vg": "vg1"}, "msg": "  \"vg1/vg1/pool1\": Invalid path for Logical Volume\n  Volume group \"None\" not found\n  Skipping volume group None\n", "rc": 5}
11:56 jith_
11:56 jith_ Please guide
11:57 jiffin sac: ^^
12:00 kkeithley #startmeeting Gluster community weekly meeting
12:00 kkeithley meh
12:00 ShwethaHP joined #gluster
12:03 jdarcy joined #gluster
12:03 Jacob843 joined #gluster
12:14 nehar joined #gluster
12:16 raghug joined #gluster
12:19 B21956 joined #gluster
12:21 B21956 joined #gluster
12:27 unclemarc joined #gluster
12:27 kaushal_ joined #gluster
12:30 ira joined #gluster
12:31 julim joined #gluster
12:39 shaunm joined #gluster
12:40 harish_ joined #gluster
12:48 wadeholler joined #gluster
12:53 mchangir joined #gluster
13:04 nbalacha joined #gluster
13:09 squizzi_ joined #gluster
13:16 ShwethaHP left #gluster
13:18 hagarth joined #gluster
13:21 bwerthmann joined #gluster
13:25 MikeLupe joined #gluster
13:28 kshlm joined #gluster
13:31 Triops joined #gluster
13:31 jiffin joined #gluster
13:35 Triops Have a question.... Got handed a broken glusterFS system. The prior admin took a disk sector error on one of the nodes. Rather than fixing it he just turned off gluster. Now I have a .glusterfs that is 4.4TB. The whole parition is only 7.2TB. Gluster has been turned off since July 11th. Is the data under .glusterfs useless?
13:35 nehar_ joined #gluster
13:37 jiffin Triops: if you are not using gluster anymore, then it is not required, I guess
13:37 jiffin Triops: basically it contains hardlinks for every file which was present in GlusterFS
13:37 jiffin system
13:38 gluco Is the bitrot feature in gluster 3.8 able to fix the corrupted files? or it just detects the issues?
13:38 Triops I will eventually have to rebuild it. I'm totally new to gluster. This is a new client and I've never seen it before.
13:38 jiffin plus some additional entries required for internal purposes
13:39 Triops should the .glusterfs directory be larger than the actual data it's pointing at?
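
jiffin's description is easy to verify on a brick: the entries under .glusterfs/ are hard links named after each file's GFID, so they point at the same inodes as the real files, and a du of .glusterfs on its own reports data that is only stored once. A quick check, with /data/brick1 standing in for the actual brick path:

    # Pick any regular file on the brick; a link count of 2 or more means the
    # .glusterfs GFID entry shares the same inode, i.e. no extra data blocks.
    stat -c '%h links, inode %i: %n' /data/brick1/some/file

    # Locate the matching GFID entry by inode; it lives at
    # .glusterfs/<first 2 gfid hex chars>/<next 2>/<full gfid>.
    find /data/brick1/.glusterfs -samefile /data/brick1/some/file
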
13:39 jiffin gluco: I guess so
13:39 ndevos gluco: I think it can detect it, and then self-heal can be used to recover, but send an email to the gluster-users list to get an answer from the bitrot experts
13:40 jiffin gluco: kotreshHr can provide more details
13:42 gluco it does detect them, but does not seem to fix them (the scrub process)
13:48 gluco will ask the list, thanks
13:50 Wizek joined #gluster
14:02 dnunez joined #gluster
14:03 Apeksha joined #gluster
14:12 rwheeler joined #gluster
14:15 nbalacha joined #gluster
14:15 kramdoss_ joined #gluster
14:15 Wizek_ joined #gluster
14:24 armyriad joined #gluster
14:29 bowhunter joined #gluster
14:31 farhorizon joined #gluster
14:42 rafaels joined #gluster
14:46 ira joined #gluster
14:51 gnulnx I've got 3.7.6 installed on two servers.  I want to upgrade to 3.7.13.  Are there any upgrade commands or migration steps that need to be run before I upgrade the binaries and start the daemons?
14:52 shyam joined #gluster
14:55 farhoriz_ joined #gluster
14:56 guhcampos joined #gluster
14:56 post-factum gnulnx: no, should go smoothly
14:58 post-factum gnulnx: just do not forget to wait for heal to be finished after each node gets upgraded, and after upgrading all clients consider bumping op-version
14:58 gnulnx post-factum: using a distributed volume, so healing shouldn't be applicable here
14:59 post-factum gnulnx: ah. ok then
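
post-factum's advice, condensed into a rough sketch for one server at a time; the volume name myvol is a placeholder, the package command depends on the distribution, and the op-version number should be checked against the 3.7.13 release notes rather than copied blindly.

    # On each server in turn: stop the management daemon, upgrade, restart.
    systemctl stop glusterd
    yum update glusterfs-server      # or apt-get / pkg, depending on the platform
    systemctl start glusterd

    # On replicated volumes, wait for self-heal to finish before touching the
    # next server (not applicable to a pure distribute volume as above).
    gluster volume heal myvol info

    # After all servers and clients run the new version, bump the cluster
    # operating version; 30712 is assumed here for the 3.7.x line -- verify it.
    gluster volume set all cluster.op-version 30712
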
15:05 wushudoin joined #gluster
15:16 kotreshhr joined #gluster
15:16 hagarth joined #gluster
15:17 bkolden3 joined #gluster
15:17 gnulnx Well, the upgrade mostly went well.  Good on the CentOS box.  On the FreeBSD box, I'm getting an error saying that it can't find changetimerecorder
15:17 gnulnx https://gist.github.com/kylejohnson/f0ad52cb99c9d7eab49bde4279024084
15:17 glusterbot Title: gist:f0ad52cb99c9d7eab49bde4279024084 · GitHub (at gist.github.com)
15:19 gnulnx is changetimerecorder part of georeplication?
15:30 ramky joined #gluster
15:36 gnulnx ah, it is part of tiering
15:48 armyriad joined #gluster
16:06 farhorizon joined #gluster
16:10 cloph jith_: no idea what gdeploy is, but the error message says you didn't specify the name of the volume group to create the logical volume on. (or the one you did specify doesn't exist...)
16:14 jith_ cloph: thanks
16:15 jith_ gdeploy is a tool to deploy glusterfs using config files.. in the config file we specify all the details
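
For completeness, the "Volume group \"None\" not found" message means gdeploy could not associate the lv entries with a volume group. A hypothetical fragment of the backend-setup section is sketched below; the exact key names vary between gdeploy versions, so compare it against the sample .conf files shipped with the package rather than trusting it verbatim.

    # Hypothetical gdeploy backend-setup fragment; device and paths are examples.
    [hosts]
    10.184.49.114

    [backend-setup]
    devices=/dev/vdb
    vgs=vg1
    pools=pool1
    lvs=lv1
    mountpoints=/gluster/brick1
    brick_dirs=/gluster/brick1/b1
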
16:19 squizzi joined #gluster
16:24 bkolden3 joined #gluster
16:29 squizzi_ joined #gluster
16:32 eclay joined #gluster
16:32 hackman joined #gluster
16:33 v12aml joined #gluster
16:34 eclay I've recently added a 3rd replicated brick running on a centos7 gluster v3.7.12-2 to an existing gluster setup running on centos6.x.  When I do a gluster volume status the new server doesn't show a self-heal daemon port  or pid.  I need some help figuring out why.
16:37 eclay http://pastebin.com/wzAHzMmv
16:37 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
16:39 eclay http://paste.ubuntu.com/20195818/
16:39 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
16:43 muneerse2 joined #gluster
16:47 post-factum eclay: what is restart glusterd?
16:47 post-factum *if
16:48 post-factum eclay: also, gluster peer status after restart, please
16:48 eclay It doesn't change whether the shd is running or not.
16:48 eclay will do.
16:49 Lee1092 joined #gluster
16:49 post-factum eclay: then, grep glusterd logs
16:50 eclay http://paste.ubuntu.com/20197136/
16:50 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
16:51 eclay @post-factum grep for what?
16:51 post-factum for errors regarding starting nfs and shd
16:51 shubhendu joined #gluster
16:55 eclay http://paste.ubuntu.com/20197632/
16:55 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
16:55 eclay I see several recurring entries like this.
16:55 muneerse joined #gluster
16:56 eclay Followed by these.  http://paste.ubuntu.com/20197746/
16:56 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
16:58 ben453 joined #gluster
17:01 derjohn_mob joined #gluster
17:02 post-factum hmm. weird
17:02 post-factum JoeJulian: ^^ ideas?
17:04 post-factum eclay: could you also inspect glustershd.log and nfs.log please?
17:04 eclay yes
17:05 rafi joined #gluster
17:05 JoeJulian Didn't you say this is not a replicated volume?
17:06 JoeJulian Oh, no. I must have mixed that up with something earlier.
17:06 eclay http://paste.ubuntu.com/20198897/
17:06 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
17:06 JoeJulian Are all the servers running the same version?
17:07 JoeJulian DNS resolution failed on host localhost
17:07 JoeJulian Looks like you need to fix your /etc/hosts
17:07 eclay was just looking at that.
17:08 eclay localhost is in there for both 127.0.0.1 and ::1
17:08 jiffin joined #gluster
17:10 post-factum eclay: do you use ipv6? there are some issues with ipv6+glusterfs now
17:10 eclay no it's on by default I guess.
17:11 skylar joined #gluster
17:13 post-factum hm, could that be 3.7.12 issue...
17:14 eclay post-factum the other two glusterfs servers are running 3.7.8-1 currently.
17:15 post-factum oh, mixed environment
17:15 JoeJulian D'oh!
17:15 post-factum perfect
17:15 post-factum you do not wnt that to happen :)
17:15 post-factum *want
17:15 eclay yes, we were trying to add a CentOS 7 server to the mix so I could take down each of the 6.x boxes and upgrade them to 7.
17:16 JoeJulian I mean, it *should* work...
17:17 eclay it looks like the newest version available via centos 6 is 3.7.11-2
17:19 JoeJulian eclay: What if you add the hostname for the centos box to /etc/hosts?
17:19 JoeJulian s/centos/centos 7/
17:19 glusterbot What JoeJulian meant to say was: eclay: What if you add the hostname for the centos 7 box to /etc/hosts?
17:19 post-factum eclay: we have public 3.7.13 rpms available for el6 with extra patches, if you need them, I could provide you with a link
17:20 post-factum eclay: but that is kind of a last resort, the present scheme really *should* work
17:20 eclay JoeJulian Let me try.
17:21 eclay should the host  name resolve to the IP we are using for gluster or 127?
17:22 msvbhat joined #gluster
17:23 eclay *post-factum JoeJulian - and are we talking about editing the /etc/hosts file on the centos 7 box or the other two servers?
17:23 JoeJulian centos 7
17:23 JoeJulian I typically assign the local hostname to a real IP, but it should work either way.
17:24 post-factum eclay: also check nsswitch.conf
17:24 JoeJulian +1
17:24 eclay k. there are only two items in the /etc/hosts file: 127.0.0.1 localhost and the 10.4.16.19 IP for the hostname.
17:26 eclay post-factum: That file, nsswitch.conf, is empty
17:26 * JoeJulian raises an eyebrow
17:27 JoeJulian fascinating
17:28 JoeJulian oh, wait.. you're missing an s
17:28 * kkeithley wonders what's wrong with the EL6 3.7.13 RPMs at https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.13/RHEL/epel-6/x86_64/
17:29 kkeithley ( same as https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.13/CentOS/epel-6/x86_64/ )
17:29 glusterbot Title: Index of /pub/gluster/glusterfs/3.7/3.7.13/CentOS/epel-6/x86_64 (at download.gluster.org)
17:30 JoeJulian Meh, I can't read. I need coffee...
17:30 eclay Some days are like that.
17:37 post-factum eclay: /etc/nsswitch.conf ?
17:37 post-factum eclay: empty?
17:37 eclay post-factum: It was.
17:37 eclay I can take the content from another server and repopulate it.
17:38 post-factum eclay: i guess one definitely should do that
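
For reference, the handful of lines glibc needs in /etc/nsswitch.conf to make localhost and the peer hostnames resolvable again; this is a minimal sketch, not the full stock CentOS 7 file:

    # /etc/nsswitch.conf -- minimal name-service switch configuration.
    # "files" makes /etc/hosts (and passwd/group) work again, "dns" restores
    # normal resolver lookups for the peer hostnames.
    passwd: files
    group:  files
    shadow: files
    hosts:  files dns
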
17:43 eclay well after recreating the nsswitch.conf file and rebooting things look better.
17:45 muneerse2 joined #gluster
17:45 post-factum much better?
17:48 eclay yes.  We will see once all the healing completes.
17:49 eclay but the gluster volume status is looking better.
17:56 post-factum okay
17:57 rwheeler joined #gluster
18:00 jwd joined #gluster
18:06 eclay Thanks post-factum and JoeJulian
18:07 post-factum np
18:07 JoeJulian You're welcome.
18:11 muneerse joined #gluster
18:12 semiosis just saw all the stuff about the spammer.  sorry i wasn't around to kick.  forgot that I had left my IRC client disconnected.  you can always ping me on twitter or email if i'm not here and you want to get my attention.
18:22 muneerse2 joined #gluster
18:23 ShwethaHP joined #gluster
18:28 chirino_m joined #gluster
18:31 bowhunter joined #gluster
18:39 muneerse joined #gluster
18:42 chirino_m joined #gluster
18:45 skylar joined #gluster
19:07 karnan joined #gluster
19:15 johnmilton joined #gluster
19:17 hchiramm joined #gluster
19:19 muneerse2 joined #gluster
19:20 johnmilton joined #gluster
19:21 ashiq joined #gluster
20:02 julim joined #gluster
20:14 hchiramm joined #gluster
20:15 muneerse joined #gluster
20:16 ahino joined #gluster
20:34 shyam joined #gluster
20:38 muneerse2 joined #gluster
20:51 muneerse joined #gluster
20:55 bwerthmann joined #gluster
21:01 muneerse2 joined #gluster
21:11 hagarth joined #gluster
21:18 ira joined #gluster
21:38 fcoelho joined #gluster
22:22 gluco joined #gluster
22:42 muneerse joined #gluster
22:52 Roland- joined #gluster
22:52 muneerse2 joined #gluster
22:52 Roland- hi folks, planning to deploy a gluster, 2 nodes. Now, question is I would like to connect to the gluster with a client but... if I set the client to node1 and node1 goes bust is there a way to failover to node2 ?
22:52 Roland- specify an alternative
23:02 JoeJulian @mount server
23:02 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
23:03 Roland- perfect thank you
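
The fallback can also be made explicit on the client side with the backup-volfile-servers mount option (older fuse clients spell it backupvolfile-server); a sketch assuming a volume named myvol served by node1 and node2:

    # Fetch the volume definition from node1, falling back to node2 if node1
    # is down at mount time; after mounting, the client talks to all bricks
    # directly, as glusterbot explains above.
    mount -t glusterfs -o backup-volfile-servers=node2 node1:/myvol /mnt/myvol

    # The equivalent /etc/fstab entry:
    # node1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backup-volfile-servers=node2  0 0
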
23:04 Roland- anything to bulletproof myself against split brain?
23:11 Roland- quorum
23:11 Roland- good, will use
23:24 JoeJulian Hehe, gotta love it when you answer your own questions. ;)
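
The quorum settings being referred to are ordinary volume options; a sketch for a replica volume named myvol, with the caveat that on a two-node setup server-side quorum only becomes meaningful once a third peer (an arbiter or a dummy peer) is added:

    # Client-side quorum: writes fail unless more than half of the replica
    # bricks are reachable.
    gluster volume set myvol cluster.quorum-type auto

    # Server-side quorum: glusterd stops the local bricks when the node loses
    # contact with the majority of trusted peers.
    gluster volume set myvol cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%
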
23:47 shyam joined #gluster
23:57 gnulnx I'd like to move my brick from server1:/ftp/bricks to server1:/storage/bricks - same server, different directory.  How do I do this?
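
The question went unanswered in the channel. Since gnulnx mentioned earlier that the volume is pure distribute, one common approach, sketched here with the volume name myvol assumed, is to add the new brick path and drain the old one so rebalance migrates the files; replace-brick ... commit force is the shortcut only when replicas exist to heal the new brick from.

    # Add the new brick location, then drain the old one; "start" kicks off
    # the data migration away from the old path.
    gluster volume add-brick myvol server1:/storage/bricks
    gluster volume remove-brick myvol server1:/ftp/bricks start

    # Wait until status reports "completed", then detach the old brick.
    gluster volume remove-brick myvol server1:/ftp/bricks status
    gluster volume remove-brick myvol server1:/ftp/bricks commit
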
