
IRC log for #gluster, 2015-03-05


All times shown according to UTC.

Time Nick Message
00:01 wushudoin joined #gluster
00:03 plarsen joined #gluster
00:09 luis_silva joined #gluster
00:31 badone_ joined #gluster
00:35 glusterbot News from newglusterbugs: [Bug 1073616] Distributed volume rebalance errors due to hardlinks to .glusterfs/... <https://bugzilla.redhat.com/show_bug.cgi?id=1073616>
00:35 glusterbot News from resolvedglusterbugs: [Bug 824286] Self-Heal of files without GFID should return I/O Error when some of the bricks are down <https://bugzilla.redhat.com/show_bug.cgi?id=824286>
00:35 glusterbot News from resolvedglusterbugs: [Bug 835423] do  not log ENOENT error found while crawling <https://bugzilla.redhat.com/show_bug.cgi?id=835423>
00:35 glusterbot News from resolvedglusterbugs: [Bug 837676] glusterfs-server-3.3.0 glustershd.log is filling up with self heal messages that fail <https://bugzilla.redhat.com/show_bug.cgi?id=837676>
00:35 glusterbot News from resolvedglusterbugs: [Bug 852406] For non-replicate type volumes, do not print brick details for "gluster volume heal <volname> info ". <https://bugzilla.redhat.com/show_bug.cgi?id=852406>
00:35 glusterbot News from resolvedglusterbugs: [Bug 857081] strange glusterfs warning error <https://bugzilla.redhat.com/show_bug.cgi?id=857081>
00:35 glusterbot News from resolvedglusterbugs: [Bug 857503] gluster volume heal $volname info sporadically reports nothing to heal <https://bugzilla.redhat.com/show_bug.cgi?id=857503>
00:35 glusterbot News from resolvedglusterbugs: [Bug 857549] brick/server replacement isn't working as documented.... <https://bugzilla.redhat.com/show_bug.cgi?id=857549>
00:35 glusterbot News from resolvedglusterbugs: [Bug 866456] gluster volume heal $<VN> full keeps increasing the No. of entries for gluster volume heal $<vn> info healed even if healing is not done <https://bugzilla.redhat.com/show_bug.cgi?id=866456>
00:35 glusterbot News from resolvedglusterbugs: [Bug 875860] Auto-healing in 3.3.1 doesn't auto start <https://bugzilla.redhat.com/show_bug.cgi?id=875860>
00:35 glusterbot News from resolvedglusterbugs: [Bug 920434] Crash in index_forget <https://bugzilla.redhat.com/show_bug.cgi?id=920434>
00:35 glusterbot News from resolvedglusterbugs: [Bug 947312] Misleading log message in afr_sh_children_lookup_done. <https://bugzilla.redhat.com/show_bug.cgi?id=947312>
00:35 glusterbot News from resolvedglusterbugs: [Bug 947824] process is getting crashed during dict_unserialize <https://bugzilla.redhat.com/show_bug.cgi?id=947824>
00:35 glusterbot News from resolvedglusterbugs: [Bug 959969] glustershd.log file populated with lot of "disconnect" messages <https://bugzilla.redhat.com/show_bug.cgi?id=959969>
00:35 glusterbot News from resolvedglusterbugs: [Bug 961307] gluster volume remove-brick is not giving usage error when volume name was not provided to the cli <https://bugzilla.redhat.com/show_bug.cgi?id=961307>
00:35 glusterbot News from resolvedglusterbugs: [Bug 964026] Gluster peer status does not show the Port No for some of the nodes in the cluster <https://bugzilla.redhat.com/show_bug.cgi?id=964026>
00:35 glusterbot News from resolvedglusterbugs: [Bug 966851] Excessive logging in Debug mode for mem_get <https://bugzilla.redhat.com/show_bug.cgi?id=966851>
00:35 glusterbot News from resolvedglusterbugs: [Bug 968301] improvement in log message for self-heal failure on file/dir in fuse mount logs <https://bugzilla.redhat.com/show_bug.cgi?id=968301>
00:35 glusterbot News from resolvedglusterbugs: [Bug 972459] heal info split-brain not printing split-brain files, if files count is less that 1024. <https://bugzilla.redhat.com/show_bug.cgi?id=972459>
00:35 glusterbot News from resolvedglusterbugs: [Bug 977797] meta-data split-brain prevents entry/data self-heal of dir/file respectively <https://bugzilla.redhat.com/show_bug.cgi?id=977797>
00:35 glusterbot News from resolvedglusterbugs: [Bug 978936] Letter case changes in gluster volume heal command <https://bugzilla.redhat.com/show_bug.cgi?id=978936>
00:35 glusterbot News from resolvedglusterbugs: [Bug 986945] Increase cli timeout for gluster volume heal  info commands <https://bugzilla.redhat.com/show_bug.cgi?id=986945>
00:36 glusterbot News from resolvedglusterbugs: [Bug 1015990] Implementation of command to get the count of entries to be healed for each brick <https://bugzilla.redhat.com/show_bug.cgi?id=1015990>
00:36 glusterbot News from resolvedglusterbugs: [Bug 1046624] Unable to heal symbolic Links <https://bugzilla.redhat.com/show_bug.cgi?id=1046624>
00:36 glusterbot News from resolvedglusterbugs: [Bug 1058204] dht: state dump does not print the configuration stats for all  subvolumes <https://bugzilla.redhat.com/show_bug.cgi?id=1058204>
00:36 glusterbot News from resolvedglusterbugs: [Bug 1113066] DHT: Log  new layout of directory generated during directory self healing <https://bugzilla.redhat.com/show_bug.cgi?id=1113066>
00:36 glusterbot News from resolvedglusterbugs: [Bug 1086748] Add documentation for the Feature: AFR CLI enhancements <https://bugzilla.redhat.com/show_bug.cgi?id=1086748>
00:36 glusterbot News from resolvedglusterbugs: [Bug 1037501] All the existing bricks are not marked source when new brick is added to volume to increase the replica count from 2 to 3 <https://bugzilla.redhat.com/show_bug.cgi?id=1037501>
00:36 glusterbot News from resolvedglusterbugs: [Bug 830168] Error message is inconsistent for the command "gluster volume heal <vol_name> full" when executed on multiple nodes <https://bugzilla.redhat.com/show_bug.cgi?id=830168>
00:36 glusterbot News from resolvedglusterbugs: [Bug 864963] Heal-failed and Split-brain messages are not cleared after resolution of issue <https://bugzilla.redhat.com/show_bug.cgi?id=864963>
00:36 glusterbot News from resolvedglusterbugs: [Bug 871987] Split-brain logging is confusing <https://bugzilla.redhat.com/show_bug.cgi?id=871987>
00:36 glusterbot News from resolvedglusterbugs: [Bug 1061044] DHT - rebalance - during data migration , rebalance is  migrating files to correct sub-vol but after that it creates link files on wrong sub-vol(sub-vol having hash layout 0000000000000000) <https://bugzilla.redhat.com/show_bug.cgi?id=1061044>
00:36 glusterbot News from resolvedglusterbugs: [Bug 1063230] DHT - rebalance - when any brick/sub-vol is down and rebalance is not performing any action(fixing lay-out or migrating data) it should not say 'Starting rebalance on volume <vol-name> has been successful' . <https://bugzilla.redhat.com/show_bug.cgi?id=1063230>
00:36 glusterbot News from resolvedglusterbugs: [Bug 820555] log spelling improvement in "glusterd_handle_cli_start_volume" <https://bugzilla.redhat.com/show_bug.cgi?id=820555>
00:36 wkf joined #gluster
00:36 harish_ joined #gluster
00:42 Prilly joined #gluster
00:52 T3 joined #gluster
00:52 bala joined #gluster
01:01 Durzo joined #gluster
01:15 topshare joined #gluster
01:19 cfeller joined #gluster
01:19 dgandhi joined #gluster
01:20 Prilly joined #gluster
01:27 prilly_ joined #gluster
01:33 Durzo joined #gluster
01:44 Prilly joined #gluster
01:44 nangthang joined #gluster
01:49 rjoseph joined #gluster
02:01 rafi joined #gluster
02:02 harish joined #gluster
02:16 coredump|br joined #gluster
02:19 lpabon_ joined #gluster
02:20 CyrilPepL joined #gluster
02:20 oxidane joined #gluster
02:20 PeterA1 joined #gluster
02:20 msvbhat_ joined #gluster
02:20 dgandhi1 joined #gluster
02:21 Netbulae_DEV joined #gluster
02:21 lpabon_ joined #gluster
02:21 msvbhat_ joined #gluster
02:22 bala joined #gluster
02:22 harish joined #gluster
02:22 JoeJulian joined #gluster
02:26 badone_ joined #gluster
02:26 haomaiwang joined #gluster
02:30 DV joined #gluster
02:46 B21956 left #gluster
02:48 T3 joined #gluster
02:59 victori joined #gluster
03:02 edong23 joined #gluster
03:12 luis_silva joined #gluster
03:13 victori joined #gluster
03:15 haomai___ joined #gluster
03:19 lalatenduM joined #gluster
03:21 bharata-rao joined #gluster
03:36 figabo joined #gluster
03:36 figabo hi
03:36 glusterbot figabo: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
03:36 figabo I have a server that currently only runs GlusterFS, but for some reason it needs a restart; can I bring another node online, and after the reboot go back to how it was before? Is that possible?
03:51 kumar joined #gluster
03:56 ppai joined #gluster
04:02 shubhendu_ joined #gluster
04:08 rafi joined #gluster
04:09 nbalacha joined #gluster
04:10 anoopcs joined #gluster
04:30 gem joined #gluster
04:37 kanagaraj joined #gluster
04:41 kripper joined #gluster
04:43 RameshN joined #gluster
04:48 deepakcs joined #gluster
04:51 victori joined #gluster
04:57 meghanam joined #gluster
05:08 Apeksha joined #gluster
05:09 sputnik13 joined #gluster
05:15 poornimag joined #gluster
05:15 ndarshan joined #gluster
05:20 lalatenduM joined #gluster
05:22 lalatenduM_ joined #gluster
05:23 jiffin joined #gluster
05:26 victori joined #gluster
05:26 rjoseph joined #gluster
05:33 raghu joined #gluster
05:43 victori joined #gluster
05:48 spandit joined #gluster
05:50 kripper joined #gluster
05:56 Bhaskarakiran joined #gluster
05:58 overclk joined #gluster
06:01 anrao joined #gluster
06:03 victori joined #gluster
06:04 vimal joined #gluster
06:05 Bhaskarakiran joined #gluster
06:08 anil joined #gluster
06:13 qntm joined #gluster
06:17 glusterbot joined #gluster
06:18 DV joined #gluster
06:18 nshaikh joined #gluster
06:19 rjoseph joined #gluster
06:26 qntm Hi! I upgraded Proxmox VE 3.1 to Proxmox VE 3.4. After the reboot gluster doesn't work. I tried # gluster peer status, but it gives an error: gluster: symbol lookup error: gluster: undefined symbol: xdr_gf1_cli_probe_rsp.
06:26 qntm Please, help me solve the problem!
06:27 atinmu joined #gluster
06:28 soumya joined #gluster
06:30 qntm strace - http://fpaste.org/193676/14255370/
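
The "undefined symbol: xdr_gf1_cli_probe_rsp" error above almost always means the gluster CLI binary and the glusterfs shared libraries now come from different releases, typically because the distribution upgrade only replaced part of the gluster packages. A minimal sketch of how one might confirm that on a Debian-based system such as Proxmox (the binary path may differ on other distributions):

    dpkg -l | grep -i gluster     # look for a mix of old and new gluster package versions
    ldd /usr/sbin/gluster         # check which libglusterfs/libgfxdr the CLI actually links against

If the versions are mixed, reinstalling matching client and server packages for the new release should clear the symbol error.
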
06:36 R0ok_ joined #gluster
06:37 DV joined #gluster
06:48 kshlm joined #gluster
06:55 anrao joined #gluster
07:08 mbukatov joined #gluster
07:12 nangthang joined #gluster
07:15 jtux joined #gluster
07:17 kevein joined #gluster
07:26 vipulnayyar joined #gluster
07:30 ppai joined #gluster
07:34 [Enrico] joined #gluster
07:43 bala1 joined #gluster
07:58 Debloper joined #gluster
08:05 quydo joined #gluster
08:05 quydo hi all
08:06 quydo our gluster client suddenly restarted
08:06 quydo when I perform gluster volume status
08:07 quydo Task Status of Volume thanquoc
08:07 quydo ------------------------------------------------------------------------------
08:07 quydo Task                 : Rebalance
08:07 quydo ID                   : 37d6b37f-5444-4492-a458-d9f3de3e3322
08:07 quydo Status               : failed
08:07 quydo I tail -f /var/log/glusterfs/thanquoc-rebalance.log
08:07 quydo here is error log
08:07 rjoseph joined #gluster
08:07 quydo [2015-03-05 07:59:25.871492] I [dht-rebalance.c:1138:gf_defrag_migrate_data] 0-thanquoc-dht: migrate data called on /benmark/smallfile-master/file_dstdir/appotasg-store3/thrd_01/d_009
08:07 quydo [2015-03-05 07:59:25.873121] I [dht-rebalance.c:1362:gf_defrag_migrate_data] 0-thanquoc-dht: Migration operation on dir /benmark/smallfile-master/file_dstdir/appotasg-store3/thrd_01/d_009 took 0.00 secs
08:07 quydo [2015-03-05 07:59:25.880560] I [dht-rebalance.c:1800:gf_defrag_status_get] 0-glusterfs: Rebalance is completed. Time taken is 78.00 secs
08:07 quydo [2015-03-05 07:59:25.880584] I [dht-rebalance.c:1803:gf_defrag_status_get] 0-glusterfs: Files migrated: 0, size: 0, lookups: 6372, failures: 0, skipped: 0
08:07 quydo [2015-03-05 07:59:25.881128] W [glusterfsd.c:1095:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x320fae88fd] (-->/lib64/libpthread.so.0() [0x320fe079d1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xd5) [0x4053e5]))) 0-: received signum (15), shutting down
08:07 quydo what should I do in this case?
08:21 quydo anyone there?
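
The "received signum (15)" in quydo's log is SIGTERM: the rebalance process was stopped (consistent with the client/node restart) rather than failing on the data itself. A hedged sketch of the usual next step, assuming the volume is otherwise healthy, is simply to start the rebalance again and watch it:

    gluster volume rebalance thanquoc start
    gluster volume rebalance thanquoc status
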
08:22 deniszh joined #gluster
08:22 kdhananjay joined #gluster
08:26 alexkit322 joined #gluster
08:28 alexkit322 Hi, everybody. I wanted to test Gluster 3.6.2 with fio, but when I tried to run the test (fio --name=randread --ioengine=gfapi --volume=testvol --brick=brickt --rw=randread --bs=10M --direct=1 --size=512M --numjobs=10 --group_reporting) it gave me an error: fio: engine gfapi not loadable
08:28 alexkit322 fio: failed to load engine gfapi
08:28 alexkit322 fio: file:ioengines.c:99, func=dlopen, error=gfapi: cannot open shared object file: No such file or directory. libgfapi0 package is installed.
08:31 rafi joined #gluster
08:32 fsimonce joined #gluster
08:32 aravindavk joined #gluster
08:37 kovshenin joined #gluster
08:42 nbalacha quydo: hi
08:44 _polto_ joined #gluster
08:45 geaaru joined #gluster
08:48 wurstpropeller joined #gluster
09:04 ghenry joined #gluster
09:08 alexkit322 can anybody advise where it is better to ask questions about working with gluster: the mailing lists, or Stack Exchange?
09:11 shubhendu_ joined #gluster
09:17 liquidat joined #gluster
09:19 poornimag alexkit322: What version of fio? Source installed or rpm? Also glusterfs-api-devel package is required
09:20 alexkit322 from rpm, 2.2.5
09:23 alexkit322 the glusterfs-devel package is installed; is glusterfs-api-devel something different?
09:29 poornimag yes
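
For the record, fio only offers the gfapi ioengine when it was built against libgfapi, so the rpm build alexkit322 is using likely lacks it. A sketch of one way to get a gfapi-enabled fio, assuming a Fedora/RHEL-style system and a source build of fio (package names are the usual ones; adjust for your distribution):

    yum install glusterfs-api glusterfs-api-devel
    git clone https://github.com/axboe/fio && cd fio
    ./configure            # should now detect gfapi support
    make && make install
    fio --name=randread --ioengine=gfapi --volume=testvol --brick=brickt --rw=randread --bs=10M --direct=1 --size=512M --numjobs=10 --group_reporting
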
09:30 jtux joined #gluster
09:32 ricky-ti1 joined #gluster
09:32 anrao joined #gluster
09:37 [Enrico] joined #gluster
09:43 liquidat joined #gluster
09:46 shubhendu_ joined #gluster
09:48 _polto_ joined #gluster
09:49 anrao joined #gluster
09:51 icejoran joined #gluster
09:56 icejoran hi folks, i hope someone can help me to shine some light on this issue; when i try to mount my gluster volume from a client i get this error; [glusterfsd-mgmt.c:1297:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
09:56 icejoran when i mount locally on the server everything works fine...
09:57 [Enrico] joined #gluster
09:58 icejoran the client is specifically allowed to connect to the volume
09:58 icejoran anything else i might miss here?
10:07 o5k joined #gluster
10:10 Norky joined #gluster
10:12 bala1 joined #gluster
10:15 badone_ joined #gluster
10:16 Dw_Sn joined #gluster
10:17 Dw_Sn how can I fix split-brain? I ran the heal command but the same files still show split-brain; should I sync from one volume to another?
10:23 nangthang joined #gluster
10:28 Pupeno joined #gluster
10:34 T0aD joined #gluster
10:39 R0ok_ joined #gluster
10:42 qntm Can I use an NFS server and a gluster server together on one node?
10:44 Dw_Sn qntm: as far as I read , yes
10:57 firemanxbr joined #gluster
11:07 icejoran can anyone give me a hint on this one? 0-glusterfs: failed to get the 'volume file' from server
11:08 nishanth joined #gluster
11:08 kdhananjay joined #gluster
11:11 o5k joined #gluster
11:13 Slashman joined #gluster
11:25 bjornar joined #gluster
11:26 harish joined #gluster
11:30 maveric_amitc_ joined #gluster
11:31 aravindavk joined #gluster
11:34 Dw_Sn joined #gluster
11:37 anrao joined #gluster
11:40 Dw_Sn joined #gluster
11:49 shubhendu_ joined #gluster
11:58 icejoran hello gluster people...?
11:58 icejoran is there really nobody who has something to say?
11:59 maveric_amitc_ joined #gluster
12:04 mbukatov joined #gluster
12:06 hchiramm icejoran, what was the command used for mounting ?
12:08 hchiramm looks like the command was used wrongly
12:10 hchiramm qntm, if this query really exist "<qntm> Can i use nfs server and gluster server together on node?" ->  its not possible.
12:11 ndevos icejoran: also, if you updated 3.6 on a non-RPM based distro, have a look at http://www.gluster.org/pipermail/gluster-users/2015-February/020781.html
12:12 hchiramm http://www.gluster.org/pipermail/gluster-users/2014-May/017255.html icejoran more reference
12:15 qntm hchiramm, but i need gluster server and nfs server on one node)
12:16 hchiramm ndevos, ^^^
12:16 ndevos qntm: what do you want to export on the server, and what nfs server do you want to use for that?
12:17 ndevos qntm: gluster comes with its own nfs server for exporting gluster volumes
12:19 qntm i have 2 disks on node0, sda and sdb. I want to use sda for a gluster brick and sdb for an NFS export. I want to mount the gluster brick on node1 and mount the NFS share on node2
12:20 ndevos qntm: well, if you do not need to mount gluster volumes through nfs, you can disable the gluster/nfs server
12:21 ndevos qntm: for all gluster volumes, you need to 'gluster volume set $VOLUME nfs.disable yes'
12:21 qntm ndevos, ok, thank you
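
A minimal sketch of what ndevos describes: the gluster/nfs server and the kernel NFS server both want to register with the portmapper, so the built-in one has to be switched off on every volume if kernel NFS will export sdb. Assuming the volumes can be enumerated with the CLI:

    # disable gluster's built-in NFS server for every volume
    for v in $(gluster volume list); do
        gluster volume set "$v" nfs.disable yes
    done
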
12:40 icejoran my mount command should be okay; mount.glusterfs <hostname>:<volume name> /mountpoint
12:40 icejoran and thanks for the links, i'll check them out
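
For anyone hitting icejoran's "failed to get the 'volume file' from server": the message means the client could not fetch the volfile from glusterd. The usual checks (a sketch, assuming the default management port and a FUSE mount; names in angle brackets are placeholders) are glusterd reachability, the exact volume name, and the volume's auth.allow setting:

    gluster volume info <volname>            # on a server: confirm the name and any auth.allow list
    telnet <servername> 24007                # from the client: confirm glusterd is reachable
    mount -t glusterfs <servername>:/<volname> /mnt/point

A client and server running quite different glusterfs versions can also fail at exactly this step, which is what the upgrade links above are about.
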
12:42 ira joined #gluster
12:44 mbukatov joined #gluster
12:50 RameshN joined #gluster
12:52 kanagaraj joined #gluster
12:54 Apeksha joined #gluster
12:55 kovshenin joined #gluster
12:57 icejoran this is pointed out as a working example; mount -t glusterfs vm2:VG01 /mnt/vol
12:57 Sjors Hi all
12:57 Sjors Is there a way to decide at server-side whether a volume is read-write or read-only
12:57 Sjors ?
12:58 icejoran and it is the way i'm mounting my volume, so i'd say the mount command is fine
12:58 Sjors I suppose if the brick is on a read-only volume that's pretty much forced, but is there also an easier way, like a volume option?
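
There is a server-side switch for Sjors' question; a sketch, assuming a 3.x release where the read-only translator is exposed as a volume option (in some releases clients may need to remount for it to take effect):

    gluster volume set <volname> features.read-only on
    # and back again:
    gluster volume set <volname> features.read-only off
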
13:00 LebedevRI joined #gluster
13:15 nishanth joined #gluster
13:16 T3 joined #gluster
13:16 rjoseph joined #gluster
13:17 chirino joined #gluster
13:20 Leildin Hi guys
13:20 Leildin quick question: have there been any reports of lesser disk performance when upgrading to 3.6
13:23 samppah Leildin: yes, there was some discussion on mailing list couple weeks ago and 3.6 has some extra option which you can tune
13:24 samppah i'm sorry i'm online with my mobile so i can't search exact information right away
13:25 Leildin that's fine, I'm glad I'm not going mad. the 3.5 and 3.6 volume are the exact same so I was going insane
13:27 Leildin if you could link me where to find the options when you have them handy, I'm leaving the office for fiber issues (yeay)
13:32 snewpy joined #gluster
13:32 snewpy joined #gluster
13:51 theron_ joined #gluster
13:51 Dw_Sn how can i fix split brain ?
13:52 Dw_Sn i tried to sync and rsync the data to gluster via the gluster client and still the same issue :s
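
For reference, on 3.5/3.6 (before the CLI grew policy-based split-brain resolution) the usual fix for a file in split-brain was manual. A hedged sketch, assuming a replica volume and that you have already decided which brick holds the good copy (all paths below are placeholders):

    gluster volume heal <volname> info split-brain     # list the affected files / gfids
    # on the brick that holds the BAD copy, working directly on the brick, not through the mount:
    rm /path/to/brick/some/file
    rm /path/to/brick/.glusterfs/<aa>/<bb>/<full-gfid>    # the gfid hard link for that file
    gluster volume heal <volname>                      # let self-heal copy the good version back
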
13:52 hagarth joined #gluster
14:02 yossarianuk joined #gluster
14:04 nbalacha joined #gluster
14:04 rjoseph joined #gluster
14:05 yossarianuk hi - I am in the middle of setting up a very simple glusterfs share (2 devices) - I have formatted the file system as EXT4, however I have just seen on the homepage that XFS should be used.
14:05 yossarianuk i'm really just using it as a file back-up, so do I need to reformat to XFS? i.e. is EXT4 a really bad idea?
14:07 nishanth joined #gluster
14:08 kdhananjay joined #gluster
14:08 Dw_Sn yossarianuk: no
14:09 yossarianuk Dw_Sn: i.e ext4 can be used ?
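
ext4 does work for a simple backup volume, but anyone reformatting anyway should use the brick layout the gluster docs recommend: XFS with a 512-byte inode size so gluster's extended attributes fit inline. A sketch, assuming /dev/sdb1 is the brick device and /export/brick1 the brick mount point (both placeholders):

    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /export/brick1
    mount /dev/sdb1 /export/brick1
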
14:10 hagarth joined #gluster
14:28 kshlm joined #gluster
14:34 rwheeler joined #gluster
14:39 kshlm joined #gluster
14:39 victori joined #gluster
14:40 lpabon joined #gluster
14:40 plarsen joined #gluster
14:49 kshlm joined #gluster
14:58 luis_silva joined #gluster
14:59 lpabon joined #gluster
15:02 deepakcs joined #gluster
15:07 bala joined #gluster
15:07 rwheeler_ joined #gluster
15:09 rwheeler_ joined #gluster
15:15 dgandhi joined #gluster
15:19 nangthang joined #gluster
15:20 kdhananjay joined #gluster
15:22 tuxle joined #gluster
15:22 tuxle hi all
15:22 tuxle can I use gluster with just 2 nodes?
15:22 tuxle or would i need 3?
15:33 yossarianuk tuxle: yes
15:33 yossarianuk tuxle: you can just have 2
15:33 yossarianuk tuxle: here is a simple guide
15:33 yossarianuk https://www.howtoforge.com/how-to-install-glusterfs-with-a-replicated-volume-over-2-nodes-on-ubuntu-14.04
15:35 tuxle yossarianuk: thank you very much
15:35 tuxle yossarianuk: will read it now :)
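
The guide linked above reduces, in sketch form, to a two-node replica 2 volume (hostnames, volume name and brick paths here are placeholders; the brick directories should already exist on both nodes):

    # on node1
    gluster peer probe node2
    gluster volume create myvol replica 2 node1:/export/brick1/data node2:/export/brick1/data
    gluster volume start myvol
    # on any client (or on the nodes themselves)
    mount -t glusterfs node1:/myvol /mnt/myvol
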
15:38 ira joined #gluster
15:47 hagarth joined #gluster
15:59 victori joined #gluster
16:11 meghanam joined #gluster
16:15 bennyturns joined #gluster
16:15 yossarianuk what ports (in terms of the firewall) need to be opened for glusterfs?
16:16 bennyturns yossarianuk, http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting
16:16 bennyturns iirc they are at the top there
16:16 yossarianuk bennyturns: cheers
16:16 firemanxbr yossarianuk, https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Console_Installation_Guide/sect-Installation_Guide-Sys_Requirements-Test_Software_Reqs.html
16:17 yossarianuk thanks !
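
The short version of those two links, for gluster 3.4 and later (a sketch, assuming an iptables firewall and the default port assignments): glusterd listens on TCP 24007 (plus 24008 for RDMA management), each brick gets one TCP port starting at 49152, and the built-in NFS server additionally needs 111, 2049 and 38465-38467:

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
    iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT    # one port per brick; widen the range as bricks are added
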
16:18 elico joined #gluster
16:25 anoopcs joined #gluster
16:27 B21956 joined #gluster
16:32 yossarianuk hi - glusterfs is working fine on one server (centos5) - on another (centos7) I am unable to get glusterd.service to start
16:34 yossarianuk when I try to start it I get - Job for glusterd.service failed. See 'systemctl status glusterd.service' and 'journalctl -xn' for details. - Failed to start GlusterFS, a clustered file-system server.
16:34 yossarianuk I have erased the glusterfs packages and re-installed - same
16:35 yossarianuk how can I identify what is causing it not to start?
16:38 yossarianuk in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log I can see --> [2015-03-05 16:37:40.580362] E [rpc-transport.c:266:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/3.6.2/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
16:43 yossarianuk i.e - I cannot run any gluster commands as glusterd will not start
16:45 yossarianuk I think it's due to the fact it was previously set up - the peer it was connecting to has had its volume removed
16:45 yossarianuk how do I completely reset glusterfs back to default? I have used 'yum erase glusterfs-server'
16:47 yossarianuk how can I delete peers, etc. if glusterd doesn't start?
16:47 yossarianuk if I do a 'glusterd --debug I can see '[glusterd-peer-utils.c:108:glusterd_peerinfo_find_by_hostname] 0-management: error in getaddrinfo: Name or service not known'
16:47 yossarianuk and '[glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname] 0-management: Unable to find friend: repository'
16:48 yossarianuk where can I remove the old peer/host info ?
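
Two separate things are visible in yossarianuk's output. The rdma.so message is usually only a warning (it goes away if the glusterfs-rdma package is installed and can be ignored for tcp-only volumes); the actual start failure appears to be the stale peer "repository" whose hostname no longer resolves. glusterd keeps all of its state, including peers, under /var/lib/glusterd, so a sketch of a reset, assuming you genuinely want to discard the old cluster membership (the second form also discards every volume definition and the node UUID):

    systemctl stop glusterd
    rm -rf /var/lib/glusterd/peers/*        # drop only the stale peer records
    # or, for a complete factory reset:
    # rm -rf /var/lib/glusterd/*
    systemctl start glusterd
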
16:48 elico left #gluster
16:54 rotbeard joined #gluster
17:02 B21956 left #gluster
17:03 B21956 joined #gluster
17:10 victori joined #gluster
17:10 andreask joined #gluster
17:11 hagarth joined #gluster
17:14 T3 joined #gluster
17:15 gem joined #gluster
17:18 meghanam joined #gluster
17:24 kripper joined #gluster
17:28 jobewan joined #gluster
17:37 lalatenduM joined #gluster
17:38 deniszh joined #gluster
17:41 rjoseph joined #gluster
17:59 victori joined #gluster
18:05 _dist joined #gluster
18:09 nshaikh joined #gluster
18:17 Rapture joined #gluster
18:21 daMaestro joined #gluster
18:22 kripper JoeJulian: Hi Joe
18:39 vipulnayyar joined #gluster
18:39 JoeJulian o/
18:40 _ndevos joined #gluster
18:45 kripper JoeJulian: we are having a nice conversation with jbrooks on #ovirt about how gluster resolves and uses hostnames
18:46 jbrooks rep 3 gluster volume: H1 H2 H3, H1 is down
18:46 kripper jbrooks: AFAIK, if we have a replica-3 volume on hosts H1, H2 and H3, and on H1 we mount H2:gluster
18:46 jbrooks You mount -t glusterfs H1:foo foo
18:46 chirino joined #gluster
18:46 JoeJulian When I tried to use it, I couldn't make it use hostnames.
18:47 kripper JoeJulian: AFAIK, if we have a replica-3 volume on hosts H1, H2 and H3, and on H1 we mount H2:gluster and H2 goes down, will the volume still be accesible from H1?
18:47 JoeJulian @hostnames
18:47 kripper JoeJulian: hostnames or IPs
18:47 * JoeJulian looks around for glusterbot
18:47 JoeJulian only hostnames
18:48 JoeJulian IPs are for people that haven't learned not to yet.
18:49 kripper JoeJulian: :-) I mean, the question is if we can access a volume mounted as H1:gluster even when H1 is down
18:49 JoeJulian access, yes. mount, no.
18:49 kripper JoeJulian: it must be mounted before H1 goes down?
18:50 kripper jbrooks: what if we use localhost:gluster on each host?
18:50 glusterbot joined #gluster
18:50 JoeJulian @mount server
18:51 JoeJulian And yes, you could mount localhost.
18:51 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
18:51 kripper great
18:51 JoeJulian Or you can use ,,(rrdns)
18:51 glusterbot You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
18:51 kripper jbrooks: so mounting localhost would be fine
18:51 JoeJulian I prefer rrdns in case the client isn't also a server.
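
An alternative to rrdns when the client mounts over FUSE is to hand the client fallback volfile servers at mount time. A sketch, assuming glusterfs 3.5 or later where the option is spelled backup-volfile-servers (older releases used the singular backupvolfile-server), with H1/H2/H3 and the volume name taken from the discussion above:

    mount -t glusterfs -o backup-volfile-servers=H2:H3 H1:/gluster /mnt/gluster
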
18:52 jbrooks kripper, for the fuse, just not the nfs
18:52 JoeJulian right
18:52 jbrooks I use ctdb for both, since I need it for the nfs
18:52 JoeJulian mounting nfs from localhost hits some kernel memory allocation race conditions.
18:57 kripper mmm...I still don't understand why mounting localhost:gluster locally as a NFS share for ovirt-engine will not work? because of the race-condition?
18:59 JoeJulian it will work... for a while... then the machine will lock up and you'll have to reboot it.
18:59 kripper JoeJulian: is this a bug?
19:00 kripper Telsin: is this the bug you meant?
19:01 JoeJulian Well I don't think that's the kind of behavior you would want, but it's within the kernel. I know it's been discussed on lkml before, but I can't find it now.
19:01 kripper JoeJulian: Where is the difference in mounting H1:gluster on H1 and mounting localhost:gluster on H1?
19:01 JoeJulian none
19:01 JoeJulian Don't nfs mount locally.
19:01 JoeJulian Why nfs anyway?
19:02 JoeJulian It just seems like asking for trouble.
19:02 kripper JoeJulian: currently ovirt-engine is limited to nfs
19:02 kripper JoeJulian: oVirt 3.6 will support running ovirt-engine from a gluster directly
19:02 * JoeJulian goes back to openstack...
19:03 jbrooks More specifically, the self-hosted engine relies on nfs
19:03 jbrooks And every host that participates in ha accesses this nfs share
19:03 kripper right (JoeJulian is not an oVirt user)
19:03 JoeJulian I tried, really I did.
19:04 jbrooks And when you mount localhost, each writes to itself, and the replication fails, and you quickly get split brains
19:04 JoeJulian eww
19:05 jbrooks But you could also ask, why isn't HA gluster nfs done w/ mount localhost
19:05 jbrooks Downstream, this is done w/ ctdb
19:05 jbrooks I think it
19:06 jbrooks w/ fuse, the traffic is automatically spread around, that's handled by gluster, w/ gluster nfs, gluster doesn't know about/do that
19:06 kripper jbrooks: I understand... I guess the problem is the ha-agent and gluster not correctly handling file locks, so the hosted-engine gets executed on different hosts, writing to different local gluster-nfs mounts and causing the split-brain
19:06 squizzi left #gluster
19:06 jbrooks It's just that this isn't how gluster nfs is meant to work
19:06 JoeJulian gluster isn't handling file locks?
19:07 JoeJulian I've done the nfs lock tests before and they've always passed.
19:07 kripper JoeJulian: not sure, but it seems jbrook's split-brains are caused because the same hosted-engine-vm is writing on different hosts
19:07 jbrooks Well
19:07 jbrooks Interestingly
19:07 jbrooks You need to turn nfs locking off to have a gluster host both serve nfs and consume it
19:08 JoeJulian Ah, ok.
19:08 kripper jbrooks: and this is causing the engine to write on different hosts at the same time?
19:08 JoeJulian Are there any other network filesystems that can be used besides nfs?
19:09 jbrooks iscsi, next release gluster
19:09 jbrooks kripper, I don't think it's the engine writing on different hosts, it only writes to its localhost, but every ha agent also writes
19:10 jbrooks And they write, each thinking that they're writing to the same place, but actually, they're each writing to their own copy
19:10 kripper jbrooks: right, the ha-agent is also writing all the time
19:10 jbrooks Maybe at once, I don't know
19:10 JoeJulian I wonder...
19:10 jbrooks So you ensure that they write to the same place w/ a vip
19:11 JoeJulian Is there a way to disable fscache on nfs mounts?
19:12 jbrooks Don't know -- I haven't been digging on this, since ctdb works just fine for me
19:12 kripper JoeJulian: would it be the solution?
19:13 JoeJulian I don't know if it would. It's just a thought experiment.
19:14 JoeJulian If ctdb works, what's the question? What are we still trying to solve?
19:14 kripper JoeJulian: would rrDNS also be useful for making sure all hosts access the same nfs share?
19:14 JoeJulian nope
19:14 kripper JoeJulian: just trying to understand things
19:15 kripper it seems like CTDB is the best solution until 3.6
19:15 jbrooks JoeJulian, you're an openstack man, eh? Have you blogged about your setup?
19:15 jbrooks Or ppl seem to like keepalived, too
19:15 JoeJulian jbrooks: I can't blog about IOs setup yet. I probably should blog about my home one.
19:16 kripper jbrooks: any particular reason you chose CTDB instead of keepalived?
19:16 JoeJulian But the setup we've built at IO is f'ing cool.
19:16 jbrooks kripper, the Red Hat product uses ctdb for nfs ha, so I figured that was a good bet
19:17 kripper jbrooks: of course
19:18 kripper JoeJulian: I'm still worried about the issue of mounting nfs locally. Is it a known issue?
19:18 Creeture joined #gluster
19:18 kripper jbrooks: is CTDB taking care of not mounting it locally?
19:19 kripper maybe I'm not understanding correctly
19:19 jbrooks kripper, I'm using ctdb to provide a virtual ip address, and only one machine at a time has that
19:19 jbrooks and when the machine hosting it goes down, it switches to another machine
19:20 jbrooks At all times, though, the engine and all the ha agents are talking to the same machine
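
In sketch form, the ctdb arrangement jbrooks describes is only two small config files plus a floating address. The values below are placeholders (private node IPs, a spare public IP on eth0), and ctdb also needs a recovery lock file on shared storage:

    # /etc/ctdb/nodes - one private IP per gluster host
    10.0.0.1
    10.0.0.2
    10.0.0.3

    # /etc/ctdb/public_addresses - the VIP that every engine/ha-agent mounts
    192.0.2.10/24 eth0
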
19:20 glusterbot News from newglusterbugs: [Bug 1194640] Tracker bug for Logging framework expansion. <https://bugzilla.redhat.com/show_bug.cgi?id=1194640>
19:20 glusterbot News from newglusterbugs: [Bug 1198963] set errno if gf_strdup() failed <https://bugzilla.redhat.com/show_bug.cgi?id=1198963>
19:20 glusterbot News from newglusterbugs: [Bug 1199003] Avoid possibility of segfault if xl->ctx is  NULL. <https://bugzilla.redhat.com/show_bug.cgi?id=1199003>
19:20 glusterbot News from newglusterbugs: [Bug 1199053] list , wr memory has to be verified <https://bugzilla.redhat.com/show_bug.cgi?id=1199053>
19:20 glusterbot News from newglusterbugs: [Bug 1198849] Minor improvements and cleanup for the build system <https://bugzilla.redhat.com/show_bug.cgi?id=1198849>
19:20 kripper right, but is it possible that CTDB tells H1 to access the share mounted on H1? Or is this not considered to be locally mounted?
19:20 glusterbot News from resolvedglusterbugs: [Bug 864963] Heal-failed and Split-brain messages are not cleared after resolution of issue <https://bugzilla.redhat.com/show_bug.cgi?id=864963>
19:20 glusterbot News from resolvedglusterbugs: [Bug 871987] Split-brain logging is confusing <https://bugzilla.redhat.com/show_bug.cgi?id=871987>
19:20 glusterbot News from resolvedglusterbugs: [Bug 1061044] DHT - rebalance - during data migration , rebalance is  migrating files to correct sub-vol but after that it creates link files on wrong sub-vol(sub-vol having hash layout 0000000000000000) <https://bugzilla.redhat.com/show_bug.cgi?id=1061044>
19:20 glusterbot News from resolvedglusterbugs: [Bug 1063230] DHT - rebalance - when any brick/sub-vol is down and rebalance is not performing any action(fixing lay-out or migrating data) it should not say 'Starting rebalance on volume <vol-name> has been successful' . <https://bugzilla.redhat.com/show_bug.cgi?id=1063230>
19:20 glusterbot News from resolvedglusterbugs: [Bug 820555] log spelling improvement in "glusterd_handle_cli_start_volume" <https://bugzilla.redhat.com/show_bug.cgi?id=820555>
19:21 jbrooks kripper, The problem isn't that you're mounting locally, the problem is that everyone needs to be using the same mount
19:21 jbrooks I haven't experienced any lockups
19:21 kripper jbrooks: ok, I'm clear now
19:21 jbrooks My machines don't go down until I take them down
19:25 Creeture Is there a command to monitor the state of a replicate after converting a single node to replicate?
19:29 kripper jbrooks: but what about "mounting nfs from localhost hits some kernel memory allocation race conditions"? I mean, if CTDB tells H1,H2 and H3 to access the nfs-share from H1, then H1 would be accessing the nfs-share *locally*. Wouldn't it trigger the race condition?
19:29 JoeJulian Creeture: Not exactly, no. "gluster volume heal $volume info" will give a hint of whether or not it's done, but until it crawls the directory it doesn't have a list of files to be replicated.
19:29 jbrooks kripper, I haven't encountered that
19:29 JoeJulian maybe they fixed that in kernel.
19:30 JoeJulian I don't really follow kernel development.
19:30 kripper JoeJulian: how notorious was this issue?
19:30 kripper JoeJulian: but this was happening with the kernel-nfs implementation, not the gluster-nfs one, right?
19:30 JoeJulian Not very. Most people mount gluster volumes natively.
19:31 kripper JoeJulian: we too, once 3.6 is released
19:32 JoeJulian iirc, the kernel nfs client tries to allocate memory triggering a gc. The same iop that triggers the gc has memory locked in the userspace nfs service, resulting in a deadlock.
19:32 JoeJulian But I could be way off. I have always refused to use nfs.
19:33 dbruhn joined #gluster
19:33 JoeJulian I suppose as long as you have enough unallocated memory, you could probably never see it.
19:34 kripper jbrooks: are you in contact with Sandro, Sahina or the devs working on hosted-engine glusterfs support?
19:35 jbrooks kripper, Yes
19:35 kripper jbrooks: how is it going?
19:36 jbrooks kripper, I haven't looked at it at all yet
19:36 kripper jbrooks: ok, let me know when you want to do some tests
19:36 jbrooks I've been daydreaming of something more docker based for engine hosting
19:36 jbrooks kripper, will do
19:37 kripper JoeJulian: thanks! we are going back to #ovirt
20:06 ira joined #gluster
20:18 deniszh joined #gluster
20:35 theron joined #gluster
20:44 ira joined #gluster
20:48 rwheeler joined #gluster
20:51 Creeture I probably don't want to know the answer to this, but has anybody GlusterFS related contacted VMware about getting into their IOVP Program and developing a native GlusterFS storage driver?
20:51 JoeJulian Creeture: I haven't heard of anything. Any interest in taking that role?
20:52 Creeture JoeJulian: I don't have the technical ability to do the work.
20:52 Creeture Unless I can write it in python.
20:52 Creeture :)
20:54 JoeJulian AFAICT, VMWare would have to do the writing. You would only need to act as a liaison.
20:54 Creeture You think? I read it as more of a 3rd party driver developer type of program.
20:56 Creeture "The I/O Vendor Program (IOVP) is open to qualified partners interested in building and certifying devices and drivers for ESX, and is primarily targeted at IHVs (Independent Hardware Vendors). However, non-IHVs may also find it useful for their needs."
21:01 JoeJulian If I were at all interested in that bloated slow closed-source high cost hypervisor, I would look at what it would take and file bug reports asking for the features needed. I'm not even sure that licensing would be compatible. Maybe find that out as well.
21:02 hagarth joined #gluster
21:02 chirino joined #gluster
21:03 Creeture I'll see what I can find. I have a ton of customers using ESXi who don't believe that OpenStack or any of the other competitors have the desired feature set yet.
21:04 Creeture Maybe I can get one to fund it.
21:04 JoeJulian Yeah, I 've heard that story. I'll know more by the end of the year.
21:05 JoeJulian Unless I get fed up with it and find someplace else.
21:05 Creeture I just signed up on their Partner Onboarding Form. We'll see what happens.
21:05 JoeJulian If you need hooked up with anyone in particular on the Gluster end of this, I'll be happy to help facilitate that.
21:06 Creeture Cool. I'll check back in with you when I find something worthwhile.
21:06 deniszh joined #gluster
21:08 prg3 joined #gluster
21:12 prg3 joined #gluster
21:13 wushudoin joined #gluster
21:24 prg3 joined #gluster
21:28 B21956 left #gluster
21:42 B21956 joined #gluster
21:50 chirino joined #gluster
21:52 T0aD joined #gluster
21:54 misc JustinClift: ok so I found an idea, we should gluster, gluster to the music of https://www.youtube.com/watch?v=BQAKRw6mToA
21:55 JoeJulian You would need Eco and JMW.
21:58 misc first, we would need a lot of vodka
21:58 misc well no
21:58 misc I already have that, we would need a camera
21:58 JoeJulian Every cell phone comes with a nearly studio quality camera these days.
21:59 misc ok so next time we organize a meetup or something
22:00 JoeJulian Don't warn johnmark. It's way more fun to hook him in to things at the last minute. It's his favorite way of doing things.
22:11 misc will keep that in mind :)
22:15 deniszh joined #gluster
22:17 lyang0 joined #gluster
22:21 glusterbot News from newglusterbugs: [Bug 1181669] File replicas differ in content even as heal info lists 0 entries in replica 2 setup <https://bugzilla.redhat.com/show_bug.cgi?id=1181669>
22:21 glusterbot News from resolvedglusterbugs: [Bug 1196898] nfs: crash with nfs process <https://bugzilla.redhat.com/show_bug.cgi?id=1196898>
22:21 glusterbot News from resolvedglusterbugs: [Bug 1176311] glfs_h_creat() leaks file descriptors <https://bugzilla.redhat.com/show_bug.cgi?id=1176311>
22:44 badone_ joined #gluster
22:55 chirino joined #gluster
22:58 gildub joined #gluster
23:07 pelox joined #gluster
23:46 gildub joined #gluster
23:50 deniszh1 joined #gluster
23:53 pelox joined #gluster
23:58 bala joined #gluster
