
IRC log for #gluster, 2016-02-24


All times shown according to UTC.

Time Nick Message
00:00 theron joined #gluster
00:01 haomaiwa_ joined #gluster
00:06 pjrebollo joined #gluster
00:10 EinstCrazy joined #gluster
00:36 theron joined #gluster
00:44 dlambrig_ joined #gluster
00:45 dlambrig_ left #gluster
00:48 nathwill joined #gluster
00:56 chirino joined #gluster
01:01 haomaiwa_ joined #gluster
01:07 hagarth joined #gluster
01:07 theron joined #gluster
01:25 nangthang joined #gluster
01:27 plarsen joined #gluster
01:35 auzty joined #gluster
01:36 auzty joined #gluster
02:16 haomaiwa_ joined #gluster
02:26 theron joined #gluster
02:27 Lee1092 joined #gluster
02:27 dlambrig_ joined #gluster
02:31 harish joined #gluster
02:35 jhyland joined #gluster
02:36 jhyland joined #gluster
02:41 kenhui joined #gluster
02:41 harish joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:01 haomaiwa_ joined #gluster
03:13 shaunm joined #gluster
03:21 haomaiwa_ joined #gluster
03:27 rocky68 joined #gluster
03:34 rocky68 left #gluster
03:38 jhyland joined #gluster
03:38 theron joined #gluster
03:46 nathwill joined #gluster
03:47 nehar joined #gluster
03:51 nishanth joined #gluster
03:52 sebamontini joined #gluster
03:56 kdhananjay joined #gluster
03:56 overclk joined #gluster
04:02 ppai joined #gluster
04:05 sakshi joined #gluster
04:05 atinm joined #gluster
04:08 shubhendu joined #gluster
04:12 nbalacha joined #gluster
04:12 itisravi joined #gluster
04:17 kanagaraj joined #gluster
04:17 RameshN joined #gluster
04:22 sathees joined #gluster
04:24 PaulCuzner joined #gluster
04:30 sebamontini anybody awake?
04:30 sebamontini i'm trying to remove 2 bricks from a replica-distribute ( 2 x 2 =4 )
04:31 sebamontini it works ok, but when i remove them, it just leaves me without the data from the old bricks i'm removing
04:31 sebamontini today i tested this and when i removed the bricks, the data from those bricks was moved/copied to the remaining bricks
04:32 nbalacha sebamontini, hi
04:32 sebamontini hi nbalacha
04:32 nbalacha what issues are you seeing with the remove brick?
04:32 sebamontini i have 2 nodes, with 2 bricks each
04:33 sebamontini the data is replicated-distributed
04:33 sebamontini 1 brick 50gb and 1 300gb (in each server)
04:33 gowtham joined #gluster
04:33 sebamontini the data is about 55gb, so it was balanced between the 2 bricks
04:34 sebamontini now i'm trying to remove the 50gb bricks from both servers to decommission those old disks
04:35 sebamontini but when i do the remove-brick command, it just says "success", but the remaining volume only has the data from brick2 (300gb) and none of the data that was stored on brick1 (50gb)
04:35 sebamontini did i explain myself correctly nbalacha ?
04:36 nbalacha sebamontini, yes, you did.
04:36 nbalacha the remove brick should trigger a rebalance operation which will move the bricks
04:36 nbalacha sorry - move the data
04:36 sebamontini exactly
04:36 nbalacha can you check the remove brick status?
04:37 sebamontini when i did this same operation in dev-environment it worked that way
04:37 sebamontini no, after it says success, when i look for the status it says there is only one brick and nothing can be removed
04:38 sebamontini [root@gluster01-secundario ~]# gluster volume remove-brick gv0 gluster01-primario:/mnt/brick1/gv0 gluster01-secundario:/mnt/brick1/gv0 status
04:38 sebamontini volume remove-brick status: failed: Volume gv0 is not a distribute volume or contains only 1 brick.
04:38 nbalacha try gluster volume remove-brick gv0 status
04:38 sebamontini nope
04:39 sebamontini [root@gluster01-secundario glusterfs]# gluster volume remove-brick gv0 status
04:39 sebamontini Usage: volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... <start|stop|status|commit|force>
04:39 sebamontini it's asking for the brick names also
04:40 nbalacha hmm
04:40 sebamontini after all this, i added back the bricks to the volume, and everything is ok
04:40 sakshi sebamontini, does gluster volume info still show the bricks that were removed?
04:40 sebamontini i'm doing a rebalance just to be on the safe side
04:40 nbalacha sebamontini, which version of gluster are you running?
04:40 sebamontini sakshi nope, after the remove-brick the volume info shows that there are only 2 bricks
04:40 DV joined #gluster
04:40 sebamontini 3.7.8 nbalacha
04:41 sakshi sebamontini, how did you remove the bricks? Did you use 'force' by any chance?
04:42 sebamontini yes, force
04:43 sebamontini as it says in the doc: https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Managing%20Volumes/#shrinking-volumes
04:43 glusterbot Title: Managing Volumes - Gluster Docs (at gluster.readthedocs.org)
04:43 sebamontini sakshi do you think i should use start instead of force?
04:44 sakshi sebamontini, yes
04:44 sebamontini in the dev environment i tested today i used force and worked as the documentations said
04:44 sebamontini ok, i'll stop the rebalance i forced once i added back the bricks
04:44 sakshi sebamontini, you see, the force command will not trigger the rebalance that migrates the files; it just removes the brick without moving its data
04:45 sebamontini wow! so the doc is wrong?!
04:45 nbalacha sebamontini, can you paste the exact command you used?
04:45 sakshi sebamontini, if you use remove-brick start, that will move all  your data from the decommissioned bricks to the others.
04:46 sebamontini sakshi that sounds exactly what i want to do :)
04:46 sebamontini nbalacha http://paste.nubity.com/c572b002a05f70cb.avrasm
04:46 glusterbot Title: Paste | Nubity (at paste.nubity.com)
04:46 sebamontini there is an info, then the remove command, and then an info again
04:47 karthikfff joined #gluster
04:48 sebamontini sakshi will try now with the start instead of force
04:49 sakshi sebamontini, if you read further there is a section 'Replace faulty brick' which mentions the steps that will help you move all your data, for reference
04:49 sebamontini yes, but in that case the new brick i'm using to replace the old one needs to be clean, with no data
04:49 sakshi sebamontini, ohh sorry please ignore my last comment
04:50 sebamontini no problem
04:50 sakshi sebamontini, yes, perform remove-brick start to trigger migration of data
04:50 sebamontini it looks like it's working ...
04:51 sakshi sebamontini, you can check the status of the removed bricks to see when the rebalance is over. Once it is completed, you must perform commit to finalize the remove-brick
04:51 sebamontini seems to be working like a charm :) thanks a lot sakshi
04:52 sakshi sebamontini, :)
04:52 sebamontini how do i perform the commit ? remove-brick vol brick1 brick2 commit ?
04:53 sakshi sebamontini, yes
04:53 sebamontini great, right now i'm watching the status of the data being moved on the node where i ran the remove-brick command
04:53 sakshi sebamontini, you must do it after the migration of all data is completed
04:53 sebamontini i guess once it finishes, it will do the same on the other node?
04:54 jiffin joined #gluster
04:56 sakshi sebamontini, you must do commit only after migration of data from all the decommissioned bricks is completed
04:58 nehar joined #gluster
04:59 skoduri joined #gluster
04:59 sebamontini sure, i'll check the status command until it says it's finished
05:00 sebamontini and also a simple df to check if all data has been moved
05:00 sebamontini thanks a lot sakshi !!
05:00 sakshi sebamontini, sure:)
05:00 sebamontini if u r ever in buenos aires (argentina) i'll buy you a beer for sure!
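The remove-brick workflow sakshi describes above, collected as a sketch using the volume and brick names from sebamontini's paste (gv0 on gluster01-primario/gluster01-secundario); an illustration of the start/status/commit sequence, not captured output:

    # start decommissioning: this triggers the rebalance that migrates data off the bricks
    gluster volume remove-brick gv0 gluster01-primario:/mnt/brick1/gv0 gluster01-secundario:/mnt/brick1/gv0 start

    # poll until every brick reports the migration as completed
    gluster volume remove-brick gv0 gluster01-primario:/mnt/brick1/gv0 gluster01-secundario:/mnt/brick1/gv0 status

    # only then finalize the removal
    gluster volume remove-brick gv0 gluster01-primario:/mnt/brick1/gv0 gluster01-secundario:/mnt/brick1/gv0 commit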
05:01 karnan joined #gluster
05:04 haomaiwa_ joined #gluster
05:05 merp_ joined #gluster
05:05 pppp joined #gluster
05:05 siel joined #gluster
05:07 rafi joined #gluster
05:09 Apeksha joined #gluster
05:14 ndarshan joined #gluster
05:15 calavera joined #gluster
05:22 javi404 joined #gluster
05:26 atalur joined #gluster
05:27 python_lover joined #gluster
05:35 Manikandan joined #gluster
05:38 sac joined #gluster
05:44 atalur joined #gluster
05:47 arcolife joined #gluster
05:48 hgowtham joined #gluster
05:50 gem joined #gluster
05:50 Bhaskarakiran joined #gluster
05:51 itisravi joined #gluster
05:59 itisravi joined #gluster
06:00 spalai joined #gluster
06:01 haomaiwa_ joined #gluster
06:05 hchiramm joined #gluster
06:06 itisravi joined #gluster
06:06 armyriad joined #gluster
06:08 ramteid joined #gluster
06:09 rafi joined #gluster
06:09 ashiq joined #gluster
06:09 skoduri joined #gluster
06:09 ekuric joined #gluster
06:10 kdhananjay joined #gluster
06:15 vmallika joined #gluster
06:20 javi404 joined #gluster
06:22 kotreshhr joined #gluster
06:23 nangthang joined #gluster
06:24 aravindavk joined #gluster
06:29 ramky joined #gluster
06:30 gem joined #gluster
06:34 mhulsman joined #gluster
06:40 python_lover joined #gluster
06:46 Saravanakmr joined #gluster
06:51 javi404 joined #gluster
06:53 bhuddah joined #gluster
06:55 ndarshan joined #gluster
07:01 haomaiwa_ joined #gluster
07:03 Wizek joined #gluster
07:04 jwd joined #gluster
07:13 skoduri_ joined #gluster
07:18 jtux joined #gluster
07:19 spalai joined #gluster
07:21 spalai1 joined #gluster
07:23 hchiramm joined #gluster
07:29 owlbot joined #gluster
07:30 harish joined #gluster
07:33 owlbot joined #gluster
07:35 [Enrico] joined #gluster
07:35 ggarg joined #gluster
07:36 jeek joined #gluster
07:37 owlbot joined #gluster
07:41 owlbot joined #gluster
07:45 owlbot joined #gluster
07:49 owlbot joined #gluster
07:53 haomaiwa_ joined #gluster
07:53 owlbot joined #gluster
07:54 jiffin joined #gluster
07:57 owlbot joined #gluster
08:01 owlbot joined #gluster
08:01 haomaiwa_ joined #gluster
08:05 owlbot joined #gluster
08:06 jri joined #gluster
08:07 Ulrar So what is server-quorum-type ? Just set it to "server" according to instructions, but I can't find any good explanation on what it does
08:07 Ulrar With a value of server, quorum will continue to work like usual, right ?
08:08 wnlx joined #gluster
08:09 owlbot joined #gluster
08:10 post-factum >50%, i assume
08:11 mhulsman joined #gluster
08:11 deniszh joined #gluster
08:12 sathees Ulrar, when you enable server-quorum on a volume, at least >50% of the nodes in the cluster should be up and running
08:13 owlbot joined #gluster
08:13 sathees Ulrar, else your brick processes will be killed
08:13 sathees Ulrar, refer to docs - http://gluster.readthedocs.org/en/release-3.7.0/Features/server-quorum/
08:13 glusterbot Title: Server Quorum - Gluster Docs (at gluster.readthedocs.org)
08:13 PaulCuzner left #gluster
08:15 Ulrar Right
08:15 vmallika joined #gluster
08:15 Ulrar Well that's fine then, proxmox is already doing that, so glusterfs can do it as well
08:15 Ulrar Thanks
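A sketch of the quorum options being discussed, assuming the usual cluster.* option names; the ratio value here is only an example, so check the server-quorum doc linked above before changing it:

    # enable server-side quorum: glusterd kills local brick processes
    # when it can see fewer than the quorum ratio of cluster nodes
    gluster volume set VOLNAME cluster.server-quorum-type server

    # the ratio is cluster-wide (set on "all"); >50% is the default behaviour
    gluster volume set all cluster.server-quorum-ratio 51%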
08:17 owlbot joined #gluster
08:21 owlbot joined #gluster
08:22 harish joined #gluster
08:25 owlbot joined #gluster
08:32 fsimonce joined #gluster
08:33 RayTrace_ joined #gluster
08:34 kdhananjay joined #gluster
08:36 sabansal_ joined #gluster
08:39 john51 joined #gluster
08:40 kshlm joined #gluster
08:40 atalur joined #gluster
08:44 Slashman joined #gluster
08:53 ekuric left #gluster
08:54 ekuric joined #gluster
08:55 MACscr|lappy joined #gluster
09:01 haomaiwa_ joined #gluster
09:06 tswartz joined #gluster
09:09 DV__ joined #gluster
09:12 [Enrico] joined #gluster
09:13 muneerse joined #gluster
09:14 ctria joined #gluster
09:15 bfm joined #gluster
09:20 ovaistariq joined #gluster
09:22 s-hell Still having trouble with my georeplication. One node goes faulty after copying multiple files into a folder. https://paste.pcspinnt.de/view/raw/061671ab
09:24 ivan_rossi joined #gluster
09:31 mhulsman joined #gluster
09:32 jiffin1 joined #gluster
09:37 haomaiw__ joined #gluster
09:38 [Enrico] joined #gluster
09:42 kdhananjay joined #gluster
09:43 [diablo] joined #gluster
09:47 Upgreydd joined #gluster
09:49 Upgreydd Hi all. I have a question. I was using Proxmox and DRBD with two nodes, but after a power failure i lost my VMs. Hopefully those were test VMs and nothing bad happened. Is GlusterFS a good alternative for a two-node HA setup?
09:50 python_lover joined #gluster
09:51 Upgreydd and how about AC failure? What caching should i use for a RAID5 array with XFS on it (for GlusterFS)? Writeback, Writethrough? I have a PERC 6E physical controller
09:54 post-factum Upgreydd: https://www.gluster.org/pipermail/gluster-users/2015-December/024568.html
09:54 glusterbot Title: [Gluster-users] Sharding - what next? (at www.gluster.org)
09:55 post-factum one may store vms on glusterfs volume, but consider reading that email
09:58 glafouille joined #gluster
09:59 mhulsman joined #gluster
10:01 7GHAACNLP joined #gluster
10:02 abhi_ joined #gluster
10:03 abhi_ JOIN
10:04 abhi_ ggarg
10:04 abhi_ Hi @ggarg
10:05 ggarg abhi_, hi
10:05 abhi_ Hello
10:05 glusterbot abhi_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:05 ggarg abhi_, could you paste the output what atinm asked
10:06 ggarg abhi_, don't mind about glusterbot
10:07 abhi_ ls -lrt glusterd/peers/ total 4 -rw-rw-r-- 1 abhishek abhishek 72 Feb 19 15:31 b88c74b9-457d-4864-9fe6-403f6934d7d1
10:07 glusterbot abhi_: -rw-rw-r's karma is now -3
10:08 ggarg abhi_, could you paste output from both nodes
10:08 atinm abhi_, it would be good if you can use fpaste.org or any such  pastebin tools to paste all the contents and then share the link
10:08 abhi_ gluster peer status from 1 st board
10:08 abhi_ # gluster peer status  Number of Peers: 1  Hostname: 10.32.1.144 Uuid: b88c74b9-457d-4864-9fe6-403f6934d7d1 State: Peer in Cluster (Connected)
10:08 ggarg abhi_, yeah
10:09 ggarg abhi_, use fpaste and give me url
10:09 abhi_ gluster peer status from 2nd board which we are removing # gluster peer status  Number of Peers: 1  Hostname: 10.32.0.48 Uuid: e7c4494e-aa04-4909-81c9-27a462f6f9e7 State: Peer in Cluster (Connected)
10:11 ggarg abhi_, and content of #cat /var/lib/glusterd/peers/*  from both node ?
10:11 ggarg abhi_, what board you are using
10:12 post-factum stop shooting -rw-rw-r's karma, guys!
10:12 post-factum -rw-rw-r++
10:12 glusterbot post-factum: -rw-rw-r's karma is now -2
10:13 jiffin1 joined #gluster
10:13 abhi_ http://fpaste.org/328313/56308792/
10:13 glusterbot Title: #328313 Fedora Project Pastebin (at fpaste.org)
10:13 abhi_ password is abc123
10:14 atinm abhi_, output of /var/lib/glusterd/peers in from which node?
10:14 atinm abhi_, can you provide the same for other node as well?
10:14 ggarg abhi_, from both node. you have pasted from 1st node
10:15 abhi_ I have pasted for both of the board
10:16 abhi_ i do not have the contents of the peers directory for the second board
10:16 atinm abhi_, no,
10:16 atinm cat glusterd/peers/* output of the other node?
10:16 abhi_ right now do not have the log of second board
10:17 ggarg abhi_, its not a log. its configuration data. you dont have data of /var/lib/glusterd/peers of 2nd node ?
10:19 abhi_ yes I'll provide them but right now i do not have
10:19 ggarg abhi_, what board you are using. and are you re-installing os of 2nd board
10:21 abhi_ Actually the board is for corporate use; it has a PowerPC arch and both boards have linux on them
10:21 ggarg abhi_, i think after rebooting your 2nd board the content of /var/lib/glusterd/* persists. it will be removed only after re-installing the os on it
10:22 ggarg abhi_, did you perform any re-installing
10:22 hackman joined #gluster
10:23 abhi_ what do you mean by re-installing the OS
10:23 ggarg abhi_, on 2nd board
10:23 ggarg abhi_, another kernel porting
10:23 abhi_ we are reloading the kernel on the second board
10:23 abhi_ and even when our board comes up we are starting the glusterd as well
10:24 ggarg abhi_, reloading the kernel should not remove /var/lib/glusterd/* content on 2nd board
10:24 abhi_ and stop the glusterd before removing the board
10:25 abhi_ so is it better if we remove this directory manually?
10:27 ggarg abhi_, no
10:28 ggarg abhi_, you should not remove these directories. i am just asking: after reloading the kernel, did your /var/lib/glusterd/* content get removed?
10:29 python_lover joined #gluster
10:29 abhi_ no
10:30 itisravi joined #gluster
10:30 ggarg abhi_, ok
10:31 abhi_ the content of glusterd/* is still there even after reboot
10:31 ggarg abhi_, yeah. it should
10:31 abhi_ give me 2 min, i'll get the log of the second board as well
10:32 ggarg abhi_, cool , sure
10:35 owlbot joined #gluster
10:39 Saravanakmr joined #gluster
10:41 skoduri joined #gluster
10:42 hchiramm_ joined #gluster
10:47 haomaiwa_ joined #gluster
10:53 RayTrace_ joined #gluster
10:55 kdhananjay Upgreydd: post-factum is right about sharding. You might want to try that specifically if you want to use gluster for VMs. As for your question, it is normally recommended that you use a replica-3 volume with client-quorum to store vms to guard against split-brains. But fortunately, with a new feature called arbiter, you don't have to spend 3x the cost. itisravi, could you explain the details of arbiter to Upgreydd?
10:56 The_Ball Any tips on how I can trace down what is causing IO blockage and this in the log: W [fuse-bridge.c:2294:fuse_writev_cbk] 0-glusterfs-fuse: 111466: WRITE => -1 (Invalid argument)
10:56 Upgreydd kdhananjay: wait wait wait. I have two nodes only :/ what arbiter is?
10:57 owlbot` joined #gluster
10:57 kdhananjay Upgreydd: passing your question over to itisravi who developed arbiter feature. itisravi, could you explain the details?
10:59 itisravi Upgreydd: haven't looked at the earlier chat logs, but have you tried arbiter volumes?
10:59 Upgreydd kdhananjay: thanks. itisravi please advise me ;)
11:00 Upgreydd itisravi: I'm at the beginning. I have two machines with proxmox and I tried DRBD9; all VMs got destroyed after an AC failure :/
11:01 Upgreydd itisravi: Maybe GlusterFS is a good alternative to DRBD? I don't know :/
11:01 owlbot` joined #gluster
11:02 Upgreydd I've been looking for 3 days into RAID controller cache configuration, but can anyone advise me how i'm supposed to configure caching with DRBD - i think the cache was the problem with DRBD. How about GlusterFS? I have 2 Dell R900s + 2 PowerVaults
11:03 Upgreydd itisravi: I haven't tried it at all
11:03 itisravi Upgreydd: glusterfs replica is a good option for hosting VM images.  We generally recommend replica 3 like kdhananjay said, but there is this new feature called arbiter volumes
11:03 Upgreydd itisravi: OK, tell me about this feature please :>
11:04 itisravi Upgreydd: Arbiter volumes are a subset of replica 3 volumes. The 3rd brick does not contain data, but just stores the metadata of files.
11:05 itisravi It has checks to prevent files from ending up in split-brain
11:05 itisravi https://github.com/gluster/glusterfs-specs/blob/master/done/Features/afr-arbiter-volumes.md has some information on it
11:05 glusterbot Title: glusterfs-specs/afr-arbiter-volumes.md at master · gluster/glusterfs-specs · GitHub (at github.com)
11:05 Upgreydd itisravi: something like RAID5
11:06 abhi_ hi ggarg
11:06 itisravi Upgreydd: kind of but there is no parity. There are 2 copies + the 3rd brick that maintains some metadata
11:06 abhi_ sorry I got some other problem on the board
11:06 Upgreydd itisravi: all 3 volumes on each node?
11:07 abhi_ could you please tell me what happens if we run two different versions of glusterfs on the two boards
11:07 itisravi Upgreydd: One volume consisting of 3 'bricks'
11:07 Upgreydd itisravi: OK. I'm new to Gluster. I haven't tried it at all
11:08 owlbot joined #gluster
11:08 itisravi Upgreydd: hmm, you could get yourself familiarized with the basic gluster terminology
11:09 itisravi http://gluster.readthedocs.org/en/latest/ is a good starting point.
11:09 glusterbot Title: Gluster Docs (at gluster.readthedocs.org)
11:09 ggarg abhi_, hi
11:09 itisravi Try to figure out how to create a replicated volume, how to access the volume etc.
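A sketch of creating the arbiter volume itisravi describes, with hypothetical host and brick paths; the "replica 3 arbiter 1" syntax is the one documented in the afr-arbiter-volumes page linked above, where the third brick stores only metadata:

    gluster volume create testvol replica 3 arbiter 1 \
        host1:/bricks/data1 host2:/bricks/data1 host3:/bricks/arbiter1
    gluster volume start testvol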
11:09 Upgreydd itisravi:  are you working with ProxMox VE?
11:09 abhi_ could you please tell me what happens if we run two different versions of glusterfs on both boards
11:10 ggarg abhi_, its depend what command you are executing
11:10 abhi_ actually now I am getting a mount failure on the second board
11:10 poornimag joined #gluster
11:10 ggarg abhi_, so if you run different versions of GlusterFS then the cluster version will be the minimum of them
11:10 abhi_ and glusterd has not started
11:10 itisravi Upgreydd: no, I'm a glusterfs developer
11:11 ggarg abhi_, what is the mount log saying
11:11 ggarg abhi_, and bottom of glusterd logs
11:11 ggarg abhi_, i mean error message of both logs
11:11 Upgreydd itisravi: I see. Thank you for advices. I'll try with GlusterFS and arbiter
11:11 abhi_ access denied by the <ip of the 1st board> : on the <mount point>
11:12 abhi_ and also not creating any log file except the cli.log
11:12 MACscr|lappy joined #gluster
11:12 ggarg abhi_, can you do "#iptables -F" on both board
11:12 ggarg abhi_, and try it out
11:12 itisravi Upgreydd: great! Feel free to ask questions on gluster-users@gluster.org if you need help
11:13 ggarg abhi_, can you paste configuration data of your 2nd board
11:13 ggarg abhi_, first i would like to know /var/lib/glusterd/* content
11:14 abhi_ nothing is there in output
11:14 ggarg abhi_, this means your /var/lib/glusterd has been removed
11:14 ggarg abhi_, i mean data
11:14 Upgreydd itisravi: one more question. How about AC failure? For example, if i cut AC power to all nodes, will I get my VMs back? I was reading that XFS is preferred for GlusterFS. Is that true?
11:14 ggarg abhi_, configuration data
11:14 abhi_ all data is there
11:15 abhi_ in glusterd/
11:15 ggarg abhi_, can you provide me that data
11:15 ggarg abhi_, yes glusterd/ data i am asking about
11:15 abhi_ yes
11:15 abhi_ ok
11:16 ggarg abhi_, can you again paste it in fpaste
11:17 ggarg abhi_, i would like to see  #ls -lrt /var/lib/glusterd/peers  and #cat /var/lib/glusterd/peers*
11:17 itisravi Upgreydd: I guess that depends on the crash consistency of the underlying filesystem too. For hosting VMs, we have a 'virt-profile' setting in ovrit (if you're using that) that optimises glusterfs caching translators for VM use cases.
11:18 itisravi Upgreydd: and yes, XFS is the recommended filesystem
11:18 itisravi \/s/ovrit/ovirt
11:19 Upgreydd itisravi: can you look at this and tell me more about this config: http://blog.cyberlynx.eu/2014/proxmox-ve-3-3-2-node-cluster-with-glusterfs/ is it good?
11:19 glusterbot Title: Proxmox VE 3.3 2-node cluster with GlusterFS | CyberLynx (at blog.cyberlynx.eu)
11:19 Upgreydd itisravi: I mean only the gluster section
11:21 ovaistariq joined #gluster
11:21 itisravi Upgreydd: it seems to be using replica 2. It is recommended to use arbiter or replica-3 instead.
11:21 Upgreydd itisravi: OK Thank you once again
11:22 Saravanakmr joined #gluster
11:22 itisravi Upgreydd: you're welcome
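For reference, the VM tuning itisravi calls the 'virt-profile' can also be applied outside oVirt via the group file that gluster packages ship in /var/lib/glusterd/groups/virt; treat this as a hint to verify on your version rather than a guaranteed command:

    # load the predefined virt option group on the VM-hosting volume
    gluster volume set VOLNAME group virt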
11:23 overclk joined #gluster
11:23 atinm joined #gluster
11:24 ggarg abhi_, ^ ^
11:24 ggarg abhi_, could you paste output of both
11:25 Peppard joined #gluster
11:26 abhi_ http://fpaste.org/328345/63131811/
11:26 glusterbot Title: #328345 Fedora Project Pastebin (at fpaste.org)
11:26 abhi_ abc123
11:28 abhi_ are able to see?
11:28 ggarg abhi_, this information indicates you are also using a 3rd board
11:29 abhi_ but we are not using the 3rd board
11:29 abhi_ do not know what is going on
11:29 ggarg abhi_, yeah something going wrong here
11:30 abhi_ how can i check it
11:30 abhi_ ?
11:30 abhi_ any idea?
11:31 haomaiwa_ joined #gluster
11:32 atinm abhi_, you'd need to detail what steps you performed to form the cluster and what operations you have done
11:35 itisravi left #gluster
11:35 ggarg abhi_, previously your fpaste data http://fpaste.org/328313/56308792/  and your updated data http://fpaste.org/328345/63131811/  do not match. it's confusing. could you tell us what operations you have performed
11:35 glusterbot Title: #328313 Fedora Project Pastebin (at fpaste.org)
11:36 ggarg abhi_, because i can see a different first board entry in each fpaste url
11:37 ggarg atinm, abhi_ has quit, maybe he will come back later. but both outputs are really ambiguous
11:38 ggarg atinm, i mean first board data in both url  http://fpaste.org/328313/56308792/  and http://fpaste.org/328345/63131811/
11:38 glusterbot Title: #328313 Fedora Project Pastebin (at fpaste.org)
11:45 owlbot joined #gluster
11:45 abhi_ joined #gluster
11:46 abhi_ @ggarg: but how can i debug it
11:50 owlbot joined #gluster
11:53 pepepepe joined #gluster
11:54 atinm REMINDER: Gluster community weekly meeting to begin in ~5 minutes
11:57 pepepepe left #gluster
11:57 haomaiwa_ joined #gluster
11:58 shubhendu joined #gluster
12:02 mhulsman joined #gluster
12:05 owlbot joined #gluster
12:05 nottc joined #gluster
12:07 jdarcy joined #gluster
12:14 gem joined #gluster
12:14 gem_ joined #gluster
12:15 ppai joined #gluster
12:16 owlbot joined #gluster
12:22 nehar joined #gluster
12:31 johnmilton joined #gluster
12:42 shubhendu joined #gluster
12:42 kkeithley joined #gluster
12:42 ppai joined #gluster
12:46 sebamontini joined #gluster
12:56 kanagaraj joined #gluster
13:00 poornimag joined #gluster
13:03 telmich joined #gluster
13:08 chirino joined #gluster
13:13 haomaiwang joined #gluster
13:16 unclemarc joined #gluster
13:17 nehar joined #gluster
13:21 Manikandan joined #gluster
13:22 ovaistariq joined #gluster
13:23 spalai joined #gluster
13:33 sebamontini joined #gluster
13:33 theron joined #gluster
13:38 mhulsman joined #gluster
13:40 Ulrar So I enabled sharding on my volume, and cp-ed my files to shard them. I do have plenty of files in .shard, but is there a way to know which shard corresponds to which file ?
13:40 Ulrar Feel like I forgot to move one file, I'd need to see which file has no shards
13:44 gessi joined #gluster
13:45 dlambrig_ joined #gluster
13:47 post-factum mmm should be in xattrs, i guess
13:47 EinstCrazy joined #gluster
13:48 Quidz joined #gluster
13:49 post-factum could you please execute getfattr -d -e hex /somebrick/some_shard_piece_file
13:51 Apeksha joined #gluster
13:52 post-factum correction: getfattr -m . -d -e hex
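One hedged way to map shards back to files, building on post-factum's getfattr suggestion: shard pieces under .shard are named after the original file's GFID, so reading trusted.gfid on a brick copy of the file and rewriting the hex as a dashed UUID lets you list its pieces. Brick paths here are placeholders:

    # on a brick, read the GFID of the original file
    getfattr -m . -d -e hex /somebrick/path/to/file
    # take the trusted.gfid value (0x...), write it as xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,
    # then look for shard pieces carrying that name in the .shard directory
    ls /somebrick/.shard | grep xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx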
13:54 harish joined #gluster
13:55 spalai left #gluster
14:00 theron joined #gluster
14:00 jiffin1 joined #gluster
14:01 haomaiwa_ joined #gluster
14:01 plarsen joined #gluster
14:04 dlambrig_ joined #gluster
14:15 julim joined #gluster
14:15 dlambrig_ joined #gluster
14:17 bitchecker joined #gluster
14:18 bitchecker hi @ all
14:18 bitchecker can anyone help me with a problem on glusterfs client?
14:18 bitchecker i can't mount a volume that is replicated on two glusterfs servers
14:18 bitchecker the servers themselves can mount the volume locally without any problem
14:19 kdhananjay joined #gluster
14:19 post-factum bitchecker: sounds like network issue. plz check your firewall first
14:19 bitchecker firewalling is disabled
14:19 bitchecker the servers can also mount if i mount with localhost
14:19 bitchecker and also if i mount with the other server's ip/name
14:20 Apeksha_ joined #gluster
14:21 theron joined #gluster
14:21 post-factum ok, then "gluster peer status", "gluster volume info SOMEVOLUME" and "gluster volume status SOMEVOLUME" plz
14:24 jiffin bitchecker: trying flushing iptables
14:25 jiffin iptables -F
14:25 jiffin on both server and client
14:25 bitchecker post-factum, http://pastebin.com/5k0aw3MZ
14:25 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
14:25 bitchecker jiffin, iptables are with no rules
14:26 bitchecker jiffin, http://fpaste.org/328419/63239971/
14:26 glusterbot Title: #328419 Fedora Project Pastebin (at fpaste.org)
14:27 post-factum how do you mount the volume from client?
14:28 chirino joined #gluster
14:29 aravindavk joined #gluster
14:29 bitchecker mount.glusterfs glusterfs01:/volume /mnt/
14:30 post-factum and the error is?..
14:31 post-factum /var/log/glusterfs/mnt.log
14:32 post-factum anyway, you refer to glusterfs server as gluster01 in volume info, and glusterfs01 on the client
14:32 post-factum inconsistency I some see
14:32 shaunm joined #gluster
14:34 bitchecker mmh
14:34 bitchecker so also this can be an issue?
14:35 post-factum if they are resolved to different IP addresses, then yes
14:35 post-factum but we want to see logs to be sure
14:36 bitchecker post-factum, http://fpaste.org/328427/14563245/
14:36 glusterbot Title: #328427 Fedora Project Pastebin (at fpaste.org)
14:36 bitchecker post-factum, in logs, it resolve with ip
14:37 yawkat joined #gluster
14:38 post-factum you definitely have dns resolving issue
14:38 post-factum try to mount by ip first to make sure it works
14:38 post-factum then, dance with dns resolving
14:39 bitchecker with the ip it doesn't work either
14:39 bitchecker i've changed the dns resolution and the mount worked fine!
14:39 skylar joined #gluster
14:40 post-factum ok
14:40 bitchecker you think that is normaly this behaviour
14:40 bitchecker ?
14:40 bitchecker *normal
14:40 post-factum make sure to use consistent server dns names across your cluster
14:41 theron joined #gluster
14:42 bitchecker post-factum, at the moment i've only got a hosts file, but i'll replace this crude configuration with a dns
14:42 bitchecker thanks a lot post-factum! :)
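The checks post-factum walked bitchecker through, gathered in one place; the volume name, server name and log path are the ones from this conversation and will differ elsewhere:

    gluster peer status
    gluster volume info volume
    gluster volume status volume
    # mount by IP first to rule out DNS (placeholder IP), then by the same name the servers were probed with
    mount -t glusterfs 10.0.0.1:/volume /mnt
    mount -t glusterfs gluster01:/volume /mnt
    # the client-side mount log shows the resolution errors
    less /var/log/glusterfs/mnt.log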
14:43 hamiller joined #gluster
14:47 Gaurav_ joined #gluster
14:53 dlambrig_ joined #gluster
14:56 The_Ball joined #gluster
14:57 nthomas joined #gluster
14:58 edong23 joined #gluster
14:59 ghenry joined #gluster
14:59 ghenry joined #gluster
15:01 sebamontini joined #gluster
15:01 haomaiwa_ joined #gluster
15:16 Jules- joined #gluster
15:22 sebamontini joined #gluster
15:22 ovaistariq joined #gluster
15:28 bitchecker post-factum, after a reboot the gluster servers are also unable to mount the volume!
15:29 bitchecker http://fpaste.org/328485/56327759/
15:29 glusterbot Title: #328485 Fedora Project Pastebin (at fpaste.org)
15:37 post-factum i see the same dns resolution error
15:38 post-factum so, you should fix your dns first
15:39 rafi joined #gluster
15:41 farhorizon joined #gluster
15:44 bitpushr joined #gluster
15:56 bitchecker post-factum, no dns problem
15:56 Bardack laughs
15:56 bitchecker it was a problem with the cache
15:57 bitchecker after the reboot the maximum cache was smaller than the volume's cache configuration
15:57 bitchecker :-/
15:57 johnmilton joined #gluster
15:58 jotun joined #gluster
16:01 haomaiwa_ joined #gluster
16:16 MACscr|lappy joined #gluster
16:17 theron joined #gluster
16:19 bowhunter joined #gluster
16:19 MACscr|lappy left #gluster
16:31 kanagaraj joined #gluster
16:35 theron joined #gluster
16:35 F2Knight joined #gluster
16:40 skoduri joined #gluster
16:41 bennyturns joined #gluster
16:43 theron joined #gluster
16:44 chirino joined #gluster
16:46 squizzi_ joined #gluster
16:48 Ulrar joined #gluster
16:50 jhyland joined #gluster
16:54 robb_nl joined #gluster
17:00 kanagaraj joined #gluster
17:01 haomaiwa_ joined #gluster
17:05 neofob joined #gluster
17:05 neofob left #gluster
17:07 jiffin joined #gluster
17:13 ashiq joined #gluster
17:13 ashiq_ joined #gluster
17:17 kassav joined #gluster
17:19 shubhendu joined #gluster
17:23 ovaistariq joined #gluster
17:25 calavera joined #gluster
17:25 dlambrig_ joined #gluster
17:32 DJClean joined #gluster
17:34 Manikandan joined #gluster
17:44 theron joined #gluster
17:47 merp_ joined #gluster
17:53 jhyland Hey guys. I'm running into a weird issue where, when I try to connect to a gv, it fails and says to look at the logs, and the logs say:
17:53 jhyland All subvolumes are down. Going offline until atleast one of them comes back up.
17:54 jhyland but when I go to the glusterfs server, and look at the volume status, it says:
17:54 jhyland volume start: gv0: failed: Volume gv0 already started
17:54 jhyland any idea?
17:54 jhyland @JoeJulian ? :-D
17:55 kanagaraj joined #gluster
17:55 jhyland when I stop/start the volume, it still persists
17:57 jhyland Just noticed it MAY be a dns issue, checking
17:57 jhyland Is there a way to have glusterfs use the FQDN as opposed to the a record?
17:58 jhyland I got the error:  0-gv0-client-1: DNS resolution failed on host gfs02-prod
17:58 jhyland whereas the fqdn lookup works fine
17:59 JoeJulian Are you using resolved?
18:01 jhyland ya, im seeing if its something with the VPC in AWS
18:01 jhyland but id still like to use the fqdn
18:01 skylar1 joined #gluster
18:01 haomaiwa_ joined #gluster
18:01 JoeJulian resolved has a bug where it doesn't use the domain search parameter. It'll be fixed in 229.
18:01 JoeJulian Add dns after resolve in nsswitch
18:02 JoeJulian To use the fqdn, probe your servers with their fqdn, then create your volume using fqdns.
18:04 jhyland ok, I thought I did, ill try it out, thanks!
18:06 jhyland @JoeJulian How can I get the list of probed servers? I tried to probe it with the FQDN, and it said it already exists
18:07 julim joined #gluster
18:09 ivan_rossi left #gluster
18:10 jhyland Sorry for the questions, havent done this for a minute
18:12 karnan joined #gluster
18:12 JoeJulian peer status should show that.
18:12 JoeJulian If it doesn't, I know the xml output does.
18:14 jhyland @JoeJulian http://pastebin.com/G86TqTZS thats what I mean
18:14 jhyland Oh I see what you mean about the peer status, ok
18:17 jhyland nvm, got it
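A sketch of JoeJulian's FQDN advice as it might look for jhyland's setup; the gfs01-prod name, domain, brick paths and replica count are hypothetical, and the nsswitch line only applies to hosts using systemd-resolved:

    # /etc/nsswitch.conf: add "dns" after "resolve" so the search domain is still honoured
    hosts: files resolve dns myhostname

    # probe peers by FQDN, then build the volume from FQDNs
    gluster peer probe gfs02-prod.example.com
    gluster volume create gv0 replica 2 \
        gfs01-prod.example.com:/bricks/gv0 gfs02-prod.example.com:/bricks/gv0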
18:28 sebamontini joined #gluster
18:33 Ulrar joined #gluster
18:38 Manikandan joined #gluster
18:40 rafi joined #gluster
18:42 RayTrace_ joined #gluster
18:55 theron joined #gluster
18:56 RayTrace_ joined #gluster
18:59 ovaistariq joined #gluster
19:01 92AAADOTV joined #gluster
19:02 skylar joined #gluster
19:14 kmmndr joined #gluster
19:17 ahino joined #gluster
19:20 jbrooks joined #gluster
19:20 robb_nl joined #gluster
19:22 RayTrace_ joined #gluster
19:23 chirino_m joined #gluster
19:30 ovaistariq joined #gluster
19:34 sebamontini joined #gluster
19:39 sebamontini joined #gluster
19:52 arcolife joined #gluster
20:01 haomaiwa_ joined #gluster
20:12 DV joined #gluster
20:21 F2Knight joined #gluster
20:31 ovaistariq joined #gluster
20:41 samsaffron___ joined #gluster
20:42 theron joined #gluster
20:57 calavera joined #gluster
21:00 F2Knight joined #gluster
21:01 haomaiwa_ joined #gluster
21:02 theron joined #gluster
21:13 sebamontini joined #gluster
21:14 deniszh joined #gluster
21:17 merp_ joined #gluster
21:20 deniszh joined #gluster
21:25 theron joined #gluster
21:52 deniszh joined #gluster
22:01 haomaiwa_ joined #gluster
22:07 DV joined #gluster
22:08 jlockwood joined #gluster
22:15 tessier joined #gluster
23:01 haomaiwa_ joined #gluster
23:03 nathwill joined #gluster
23:08 calavera joined #gluster
23:16 sebamontini joined #gluster
23:27 delhage joined #gluster
23:49 calavera joined #gluster
23:57 delhage joined #gluster
