
IRC log for #gluster, 2015-03-26


All times shown according to UTC.

Time Nick Message
00:02 theron joined #gluster
00:08 theron joined #gluster
00:17 harish joined #gluster
00:23 plarsen joined #gluster
00:54 siel joined #gluster
01:03 bennyturns joined #gluster
01:21 jvandewege_ joined #gluster
01:27 m0zes joined #gluster
01:27 bala joined #gluster
01:32 aea joined #gluster
01:48 jvandewege_ joined #gluster
01:58 T3 joined #gluster
02:06 harish joined #gluster
02:32 glusterbot News from newglusterbugs: [Bug 1205970] can glusterfs be running on cygwin platform ? <https://bugzilla.redhat.com/show_bug.cgi?id=1205970>
02:40 lalatenduM joined #gluster
02:40 haomaiwa_ joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:53 bharata-rao joined #gluster
03:14 T3 joined #gluster
03:14 T3 joined #gluster
03:15 spandit joined #gluster
03:18 victori joined #gluster
03:35 soumya_ joined #gluster
03:40 bennyturns joined #gluster
03:42 itisravi joined #gluster
03:48 lalatenduM joined #gluster
03:55 nangthang joined #gluster
04:00 nishanth joined #gluster
04:01 T3 joined #gluster
04:07 rjoseph joined #gluster
04:07 itisravi_ joined #gluster
04:10 RameshN joined #gluster
04:13 shubhendu joined #gluster
04:16 kanagaraj joined #gluster
04:20 sripathi joined #gluster
04:20 sripathi left #gluster
04:23 kshlm joined #gluster
04:31 epaphus joined #gluster
04:34 RameshN joined #gluster
04:34 T3 joined #gluster
04:41 anoopcs joined #gluster
04:41 ndarshan joined #gluster
04:41 jiffin joined #gluster
04:47 schandra joined #gluster
04:47 schandra_ joined #gluster
04:48 rafi joined #gluster
04:54 anil joined #gluster
05:03 nbalacha joined #gluster
05:03 T3 joined #gluster
05:09 ashiq joined #gluster
05:10 Manikandan joined #gluster
05:12 vimal joined #gluster
05:17 T3 joined #gluster
05:19 karnan joined #gluster
05:21 bharata-rao joined #gluster
05:23 kumar joined #gluster
05:26 vimal joined #gluster
05:28 dusmant joined #gluster
05:33 glusterbot News from newglusterbugs: [Bug 1200268] Upcall: Support for lease_locks <https://bugzilla.redhat.com/show_bug.cgi?id=1200268>
05:33 T3 joined #gluster
05:36 atalur joined #gluster
05:36 kdhananjay joined #gluster
05:40 Lee- joined #gluster
05:45 aravindavk joined #gluster
05:47 hgowtham joined #gluster
05:49 ramteid joined #gluster
05:50 ndarshan joined #gluster
05:50 gem joined #gluster
05:51 T3 joined #gluster
05:52 shubhendu joined #gluster
05:52 Lee- joined #gluster
05:52 lalatenduM joined #gluster
06:02 Bhaskarakiran joined #gluster
06:06 raghu joined #gluster
06:07 spandit joined #gluster
06:11 Lee- joined #gluster
06:14 overclk joined #gluster
06:15 lyang0 joined #gluster
06:17 T3 joined #gluster
06:18 kripper joined #gluster
06:19 kripper "systemctl restart glusterd" doesn't start
06:19 kripper which are the relevant logs?
06:20 kripper /var/log/glusterfs/glustershd.log?
06:20 hagarth kripper: etc-glusterfs-glusterd.log
06:22 kripper [2015-03-26 06:20:35.878979] W [rdma.c:4221:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device)
06:22 kripper [2015-03-26 06:20:35.878989] E [rdma.c:4519:init] 0-rdma.management: Failed to initialize IB Device
06:22 kripper [2015-03-26 06:20:35.878992] E [rpc-transport.c:333:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
06:22 kripper [2015-03-26 06:20:35.879031] W [rpcsvc.c:1524:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed
06:22 soumya_ joined #gluster
06:24 soumya joined #gluster
06:25 kripper sorry
06:25 kripper my fault
06:25 kripper network problem
06:25 kripper thanks
06:30 baoboa joined #gluster
06:37 shubhendu joined #gluster
06:38 ndarshan joined #gluster
06:41 T3 joined #gluster
07:17 haomaiwa_ joined #gluster
07:17 suliba joined #gluster
07:21 jtux joined #gluster
07:32 spandit joined #gluster
07:37 atalur joined #gluster
07:39 dusmant joined #gluster
07:46 schandra joined #gluster
07:47 deniszh joined #gluster
07:52 o5k__ joined #gluster
07:54 o5k_ joined #gluster
07:54 lyang0 joined #gluster
08:11 maveric_amitc_ joined #gluster
08:13 nangthang joined #gluster
08:19 spiekey joined #gluster
08:19 spiekey Hello!
08:19 glusterbot spiekey: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:21 spiekey i am running CentOS with Gluster 3.6.2 using a 3 Node Replica. Now one node lost its brick (hard drive failed). So i replaced it, formatted it and mounted it back to its brick directory. Then i restarted my gluster service and „gluster volume heal RaidVolB info“ now shows: node07 => Status: Transport endpoint is not connected
08:21 spiekey any ideas?
08:27 Bhaskarakiran joined #gluster
08:30 fsimonce joined #gluster
08:35 Bhaskarakiran joined #gluster
08:35 atalur joined #gluster
08:48 [Enrico] joined #gluster
08:50 xrsanet joined #gluster
08:53 spandit joined #gluster
09:01 anoopcs joined #gluster
09:05 bharata_ joined #gluster
09:06 ctria joined #gluster
09:09 o5k__ joined #gluster
09:13 soumya joined #gluster
09:16 atalur joined #gluster
09:17 atalur joined #gluster
09:17 Guest92670 joined #gluster
09:22 spiekey anyone?
09:22 Slashman joined #gluster
09:27 ktosiek joined #gluster
09:32 soumya joined #gluster
09:33 glusterbot News from newglusterbugs: [Bug 1206065] [Backup]: 'Glusterfind create' creates a sub-directory at $glusterd_working_directory/session_dir even when the command fails <https://bugzilla.redhat.com/show_bug.cgi?id=1206065>
09:35 kovshenin joined #gluster
09:36 pookey left #gluster
09:38 stickyboy joined #gluster
09:44 ndarshan joined #gluster
09:48 shubhendu joined #gluster
09:54 kumar joined #gluster
09:59 ira joined #gluster
10:02 ira joined #gluster
10:03 karume joined #gluster
10:04 o5k_ joined #gluster
10:04 [Enrico] joined #gluster
10:05 karume hi there! just a question, wondering if someone has a clue... I found that from the client side, the listed directories are duplicated (this only happens when listing directories, not files). Searching on google I've seen that more people found the same issue but no answers so far :(. Has anyone experienced the same?
10:06 karume also I don't have a clue in the logs
10:08 T3 joined #gluster
10:09 bala joined #gluster
10:18 ndarshan joined #gluster
10:18 shubhendu joined #gluster
10:29 maveric_amitc_ joined #gluster
10:29 o5k joined #gluster
10:32 hagarth karume: what backend file system are you using?
10:43 atalur joined #gluster
10:48 kkeithley1 joined #gluster
10:56 firemanxbr joined #gluster
10:57 karume @hagarth: I'm using xfs on Debian 7.8, but just found a workaround here: http://dev-random.net/how-to-configure-a-distributed-file-system-with-replication-using-glusterfs/ (<-- on the "Mounting redundant", the /etc/glusterfs/datastore.vol config file)
10:57 karume thanks anyways!
11:04 glusterbot News from newglusterbugs: [Bug 1206120] DHT: nfs.log getting filled with "I" logs <https://bugzilla.redhat.com/show_bug.cgi?id=1206120>
11:06 glusterbot News from resolvedglusterbugs: [Bug 1142402] DHT: nfs.log getting filled with "I" logs <https://bugzilla.redhat.com/show_bug.cgi?id=1142402>
11:06 spiekey joined #gluster
11:16 RameshN joined #gluster
11:18 Manikandan_ joined #gluster
11:18 al joined #gluster
11:18 atalur_ joined #gluster
11:18 schandra_ joined #gluster
11:19 osiekhan3 joined #gluster
11:19 Marqin_ joined #gluster
11:20 Dave2_ joined #gluster
11:21 sachin_ joined #gluster
11:21 tom][ joined #gluster
11:21 nixpanic_ joined #gluster
11:21 nixpanic_ joined #gluster
11:21 cyberbootje1 joined #gluster
11:22 pjschmit1 joined #gluster
11:23 ccha2 joined #gluster
11:27 sripathi joined #gluster
11:29 [Enrico] joined #gluster
11:29 harmw joined #gluster
11:29 scuttle|afk joined #gluster
11:29 scuttle|afk joined #gluster
11:29 ThatGraemeGuy joined #gluster
11:30 Rydekull_ joined #gluster
11:32 T0aD joined #gluster
11:32 ctria joined #gluster
11:33 92AAAWR99 joined #gluster
11:33 jiffin1 joined #gluster
11:33 stickyboy joined #gluster
11:33 xrsanet joined #gluster
11:33 CP|AFK joined #gluster
11:33 R0ok_ joined #gluster
11:33 marcoceppi_ joined #gluster
11:33 eclectic joined #gluster
11:33 jcastillo joined #gluster
11:33 masterzen joined #gluster
11:33 partner joined #gluster
11:33 xaeth_afk joined #gluster
11:33 tg2 joined #gluster
11:33 churnd joined #gluster
11:33 kaii joined #gluster
11:33 ndk joined #gluster
11:33 samppah joined #gluster
11:34 jiffin1 joined #gluster
11:34 92AAAWR99 joined #gluster
11:36 bharata_ joined #gluster
11:37 spiekey joined #gluster
11:37 atalur_ joined #gluster
11:38 Leildin hi, just wondering, what would the impact of "sudo gluster volume rebalance [VOLUME] start" be on a distributed volume?
11:38 nshaikh joined #gluster
11:39 sachin_ joined #gluster
11:40 jtux joined #gluster
11:46 kkeithley1 joined #gluster
11:47 soumya_ joined #gluster
11:47 raging-dwarf joined #gluster
11:48 ackjewt joined #gluster
11:57 T3 joined #gluster
12:01 corretico joined #gluster
12:01 anoopcs joined #gluster
12:01 julim joined #gluster
12:18 rjoseph joined #gluster
12:23 kshlm joined #gluster
12:30 kanagaraj joined #gluster
12:32 itisravi_ joined #gluster
12:34 glusterbot News from newglusterbugs: [Bug 1206134] glusterd :- after volume create command time out, deadlock has been observed among glusterd and all command keep failing with error "Another transaction is in progress" <https://bugzilla.redhat.com/show_bug.cgi?id=1206134>
12:42 rjoseph joined #gluster
12:43 bene2 joined #gluster
12:47 nangthang joined #gluster
12:49 wkf joined #gluster
12:56 LebedevRI joined #gluster
13:02 tuxle joined #gluster
13:02 tuxle hi all
13:03 tuxle I have my gluster working and online, now I wonder how I should add it to libvirt. Should I add it as a directory or as a glusterfs?
13:03 bennyturns joined #gluster
13:03 Slashman_ joined #gluster
13:04 bennyturns joined #gluster
13:12 T3 joined #gluster
13:21 virusuy joined #gluster
13:21 virusuy joined #gluster
13:29 plarsen joined #gluster
13:29 gnudna joined #gluster
13:29 gnudna left #gluster
13:30 gnudna joined #gluster
13:31 theron joined #gluster
13:34 dgandhi joined #gluster
13:38 georgeh-LT2 joined #gluster
13:38 thangnn_ joined #gluster
13:39 kasturi joined #gluster
13:47 victori joined #gluster
13:48 raging-dwarf tuxle: ??
13:49 raging-dwarf you want to replicate your vm's with gluster?
13:49 raging-dwarf better use LVM for VM's and use something like DRBD or sheepdog
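Both of tuxle's options can work: qemu/libvirt can use a gluster volume either through a FUSE mount treated as a directory storage pool, or natively over the gluster protocol if qemu was built with gluster support. A minimal sketch of the directory-pool approach, with the volume name, host and paths as placeholders:

    # assumes a volume "myvol" served by host gluster1; adjust names and paths
    mount -t glusterfs gluster1:/myvol /var/lib/libvirt/glustervols
    virsh pool-define-as gvpool dir --target /var/lib/libvirt/glustervols
    virsh pool-start gvpool
    virsh pool-autostart gvpool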
14:02 cornusammonis joined #gluster
14:10 plarsen joined #gluster
14:11 anrao joined #gluster
14:18 jmarley joined #gluster
14:18 jmarley joined #gluster
14:18 Gill joined #gluster
14:27 soulhunter joined #gluster
14:27 nbalacha joined #gluster
14:33 AndreeeCZ joined #gluster
14:33 AndreeeCZ hi. I'm new to gluster; I have set up a 2 point replica system and it says its connected and running
14:34 AndreeeCZ how long does it take to replicate stuff?
14:34 adamaN left #gluster
14:34 ultrabizweb joined #gluster
14:34 necrogami joined #gluster
14:35 glusterbot News from newglusterbugs: [Bug 1201484] glusterfs-3.6.2 fails to build on Ubuntu Precise: 'RDMA_OPTION_ID_REUSEADDR' undeclared <https://bugzilla.redhat.com/show_bug.cgi?id=1201484>
14:43 roost joined #gluster
14:50 lpabon joined #gluster
14:57 theron joined #gluster
15:00 nishanth joined #gluster
15:01 T3 joined #gluster
15:05 corretico joined #gluster
15:06 spiekey joined #gluster
15:07 Slashman joined #gluster
15:09 7GHAA8KNH joined #gluster
15:10 B21956 joined #gluster
15:12 nbalacha joined #gluster
15:16 deZillium joined #gluster
15:20 kripper joined #gluster
15:21 _Bryan_ joined #gluster
15:25 raging-dwarf AndreeeCZ: depends on the local weather report
15:26 AndreeeCZ raging-dwarf, its already more than an hour
15:26 T3 joined #gluster
15:27 AndreeeCZ raging-dwarf, does that imply that i havent set it up properly?
15:30 harish joined #gluster
15:30 raging-dwarf AndreeeCZ: how much data is it?
15:31 raging-dwarf over what kind of connection?
15:31 AndreeeCZ one empty file
15:31 AndreeeCZ tcp
15:31 bennyturns joined #gluster
15:31 raging-dwarf an empty file should be over in less than a second
15:31 raging-dwarf what do the logs tell you?
15:35 AndreeeCZ good question
15:37 AndreeeCZ raging-dwarf, is it the /var/log/glusterfs/bricks/<sharedFile>.log ?
15:38 deZillium joined #gluster
15:39 AndreeeCZ no
15:39 plarsen joined #gluster
15:41 Gill joined #gluster
15:43 nangthang joined #gluster
15:43 AndreeeCZ raging-dwarf, reading from socket failed. Error (Transport endpoint is not connected)
15:44 AndreeeCZ raging-dwarf, http://pastie.org/10055215
15:45 raging-dwarf looks good to me
15:45 raging-dwarf i don't see anything about your disconnected endpoint
15:45 raging-dwarf State: Peer in Cluster (Connected)
15:46 AndreeeCZ yes
15:47 raging-dwarf so where is the not-connected endpoint?
15:48 nshaikh joined #gluster
15:48 AndreeeCZ /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
15:49 AndreeeCZ the time is weird, but that might be some timezone fun
15:49 AndreeeCZ ok but if it says connected
15:49 AndreeeCZ i have touched a file
15:50 raging-dwarf still no sync?
15:50 AndreeeCZ no sync
15:51 AndreeeCZ a different partition is mounted on the path that gluster should be syncing
15:51 AndreeeCZ can that be the cause of the problem?
15:56 tetreis joined #gluster
15:56 AndreeeCZ netstat -tap | grep glusterfsd says established to both first and second gluster
15:57 plarsen joined #gluster
16:00 deniszh joined #gluster
16:02 victori joined #gluster
16:08 soumya_ joined #gluster
16:11 Rydekull joined #gluster
16:17 JoeJulian AndreeeCZ: GlusterFS isn't a service that replicates bricks, it's a filesystem. You need to mount the glusterfs filesystem if you want to use the features of glusterfs. Writing directly to the brick is like writing directly to a block of a hard drive and expecting xfs to know what to do with what you wrote.
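In other words, files have to be created through a glusterfs client mount, not in the brick directory itself. A minimal sketch, assuming a volume named myvol served from server1 (names and paths are placeholders):

    mount -t glusterfs server1:/myvol /mnt/myvol    # client mount; talks to all bricks
    touch /mnt/myvol/testfile                       # this write is replicated
    # the brick directory (e.g. /data/brick1) is written only by glusterfs itself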
16:22 AndreeeCZ i see, so i was doing it a bit wrong there :)
16:23 spiekey joined #gluster
16:25 hagarth JoeJulian: maybe glusterbot should learn this answer for a faq?
16:35 glusterbot News from newglusterbugs: [Bug 1184626] Community Repo RPMs don't include attr package as a dependency <https://bugzilla.redhat.com/show_bug.cgi?id=1184626>
16:38 ecchcw joined #gluster
16:43 kanagaraj joined #gluster
16:47 kripper hi, I have an architecture question
16:47 kripper I saw that gluster's performance is limited by the bandwidth to the peers
16:47 kripper in theory, if a client is accesing a volume that has a replica brick on the same machine
16:48 kripper it could write at the speed of the local disk
16:48 T3 joined #gluster
16:48 kripper and send delayed data packets to other peers, but this doesn't seem to be the case as the write speed seems to be limited by the bandwidth to other peers
16:48 kripper can this kind of local-disk-write-speed  + delayed replication to other peers be configured?
16:48 kripper is geo-replication the answer?
16:49 kripper sometimes readonly replication is ok
16:49 AndreeeCZ JoeJulian, ok so.. how do i write to a brick so it gets replicated?
16:49 kripper it would be cool if a normal replica volume could be reconfigured to georeplication on the go
16:57 ckotil_ joined #gluster
17:01 theron joined #gluster
17:03 victori joined #gluster
17:04 ctria joined #gluster
17:11 stickyboy joined #gluster
17:14 AndreeeCZ JoeJulian, got it, works now. I didnt read the docs properly :)
17:16 aneil2 joined #gluster
17:16 Pupeno joined #gluster
17:17 soumya_ joined #gluster
17:18 prilly joined #gluster
17:19 Rapture joined #gluster
17:19 lalatenduM joined #gluster
17:19 rotbeard joined #gluster
17:19 aneil2 I have a volume that had a replace-brick process interrupted by a power outage.  Now I have a permanent volume task that I cannot seem to clear
17:20 aneil2 any idea how to purge failed volume tasks?
17:20 T3 joined #gluster
17:20 soulhunter joined #gluster
17:49 jobewan joined #gluster
17:54 T3 joined #gluster
18:00 lpabon joined #gluster
18:13 T3 joined #gluster
18:13 victori joined #gluster
18:14 john-qntm joined #gluster
18:18 john-qntm Hi, I use 2 bricks in "replica 2". I added a 3rd brick and chose "replica 3". But the data is replicated to the new brick at 5-6mb/s. It's very very slow.... The brick is 500gb
18:18 anrao joined #gluster
18:18 john-qntm How do I increase the speed so that all the bricks have the same data?
18:20 john-qntm Anyone?
18:20 victori_ joined #gluster
18:21 gnudna stick around i am sure someone might have an answer
18:24 victori joined #gluster
18:30 aneil2 is it possible to downgrade from 3.6.2 to 3.5.x?
18:30 aneil2 an existing cluster I mean
18:46 spiekey joined #gluster
18:49 gnudna aneil2 for what it is worth in my lab env
18:49 gnudna i uninstalled gluster*
18:49 gnudna after i re-installed, my volumes were present
18:50 gnudna you might want to backup /etc/glusterfs
18:50 bennyturns joined #gluster
18:50 gnudna obviously this is not guaranteed to work at all
18:50 gnudna just mentioning a by-product of my testing
18:52 T3 joined #gluster
18:54 aneil2 thanks, did you do one server at a a time and repeer or shutd them all down and reinstal all at once?
18:56 gnudna pretty much in my case i was re-installing so i stopped both servers and un-installed
18:56 gnudna but this was a lab so i had no worries of losing data
18:56 gnudna contrary i wanted to destroy what i had there
18:57 aneil2 yes i have a production ovirt system running vms on this and it is causing huge headaches
18:57 gnudna there are some very knowledgeable people around so i would wait for a better answer
18:57 gnudna my answer is like rolling the dice
18:57 aneil2 thanks though
18:58 gnudna btw what are your issues
18:58 gnudna im running kvm on glusterfs without too many issues so far
18:58 gnudna not using ovirt though
18:58 gnudna just using virsh and creative mount points
19:05 aneil2 gluster daemon becomes unresponsive and when restarted it kicks off a heal. then VS freeze during the heal.
19:06 aneil2 err VMs not VS
19:07 gnudna gluster volume info $volume
19:07 aneil2 hmm can I paste
19:08 aneil2 not used to irc
19:08 gnudna pastebin.org to be safe
19:09 aneil2 http://pastebin.com/ALt6E01C
19:09 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
19:09 daMaestro joined #gluster
19:09 aneil2 http://ur1.ca/jzpyy
19:11 gnudna in my case http://fpaste.org/203395/27397071/
19:11 gnudna you can tell i am new
19:11 gnudna i have not deviated from the defaults much
19:11 aneil2 *nod*
19:12 gnudna server.allow-insecure: on
19:12 gnudna seems to be missing on yours
19:12 Bardack joined #gluster
19:12 gnudna which notes for 3.6.x claim it is required for kvm live migration to work
19:12 magamo joined #gluster
19:13 gnudna brb
19:14 aneil2 hmm it says that it is incompatible with some of the clients
19:15 Bardack joined #gluster
19:16 Bardack joined #gluster
19:17 T3 joined #gluster
19:17 calisto joined #gluster
19:17 calisto joined #gluster
19:30 spiekey Hello!
19:30 glusterbot spiekey: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:30 spiekey i am running CentOS with Gluster 3.6.2 using a 3 Node Replica. Now one node lost its brick (hard drive failed). So i replaced it, formatted it and mounted it back to its brick directory. Then i restarted my gluster service and „gluster volume heal RaidVolB info“ now shows: node07 => Status: Transport endpoint is not connected
19:30 spiekey any ideas?
19:31 gnudna do you have to peer probe from a trusted hsot?
19:31 gnudna host?
19:32 spiekey what do you mean?
19:32 gnudna is the host trusted in the cluster?
19:32 gnudna i know when i added my hosts i had to do a peer probe
19:33 gnudna im fairly new so keep that in mind
19:35 spiekey i dont know :) But i had a working 3 node cluster, then one of my bricks failed on that peer. So i did not add or remove a peer.
19:35 spiekey i have State: Peer in Cluster (Connected) for all of my nodes
19:38 deniszh joined #gluster
19:38 semiosis spiekey: my guess is that your new disk mount is lacking an xattr.  check the brick log on the server for details.  /var/log/glusterfs/bricks
19:39 semiosis transport endpoint not connected suggests the brick export daemon is not running.  it probably tried to start then put a failure message in the log
19:39 spiekey ok, because i replaced my /dev/sdb1 completely, so there wont be an xattr on it
19:39 spiekey its basically blank formatted ext4
19:40 spiekey i will just check the logs
19:41 bennyturns joined #gluster
19:43 spiekey only 2167967 log entries :)
19:44 spiekey http://fpaste.org/203429/73990741/
19:45 calum_ joined #gluster
19:57 ckotil joined #gluster
19:57 spiekey joined #gluster
19:57 spiekey oh sorrry, got disconnected. did i miss a reply?
20:01 theron joined #gluster
20:06 kkeithley_ ,,(ppa)
20:06 glusterbot The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
20:09 deniszh joined #gluster
20:13 devZer0 joined #gluster
20:14 spiekey E [posix.c:5626:init] 0-RaidVolB-posix: Extended attribute trusted.glusterfs.volume-id is absent
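That error is the usual symptom of a brand-new, empty brick directory: the trusted.glusterfs.volume-id xattr that glusterd expects disappeared with the old disk. A common recovery sketch, with the brick path as a placeholder, is to copy the xattr from a surviving brick and force-start the volume so self-heal can repopulate the data:

    # on a node with a healthy brick: read the volume-id
    getfattr -n trusted.glusterfs.volume-id -e hex /bricks/RaidVolB
    # on the node with the replaced disk: apply the same value to the new brick
    setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-above> /bricks/RaidVolB
    gluster volume start RaidVolB force
    gluster volume heal RaidVolB full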
20:14 devZer0 hello, i`d like to know if gluster is suitable for my use-case. i was reading into docs for a while now, but i`m unsure if gluster is the right solution
20:14 devZer0 anybody willing to answer some questions?
20:17 necrogami joined #gluster
20:17 DV joined #gluster
20:18 plarsen joined #gluster
20:18 plarsen joined #gluster
20:19 spiekey just go ahead i guess
20:23 devZer0 thanks.  i have a 2-node webapp cluster. both nodes access a shared fs contained within a HA databas (dbfs), similar nfs mount. so both nodes have access to the same data under /mnt/dbfs
20:23 devZer0 this dbfs has some disadvantages, so we want to replace it with another shared filesystem. but we want to avoid single point of failure.
20:24 devZer0 so the question is, if i can put a glusterfs volume on node1, a mirror copy of that on node2 and then simultaneously access the replicated volume from both nodes.
20:24 devZer0 i.e. mount  brick1 on node1 locally and mount brick2 on node2 locally.
20:25 devZer0 so, whenever one node goes down, the other node has access to the current data
20:25 T3 joined #gluster
20:25 devZer0 woudl that work?
20:26 gnudna devZer0 doing that with kvm
20:26 gnudna works well enough for me
20:26 gnudna just make sure both db's do not write to the same file
20:26 gnudna can get ugly
20:26 devZer0 ah, good.
20:27 devZer0 yes, sure. will get split brain or such if there is a disconnect between the nodes
20:28 devZer0 is such a setup described somewhere? i need to discuss that with colleagues first before we make a prototype setup.  did not find something appropriate...
20:29 gnudna i have not seen many examples for this setup myself
20:30 gnudna but it does work well with my vm's
20:30 devZer0 does the mount on localhost only mount the local brick, or can the fuse/client process communicate with the two bricks on the two servers simultaneously?
20:31 deniszh joined #gluster
20:31 gnudna i used glusterfs to mount the local brick
20:31 devZer0 ok. so when you even have vm`s with that, it sounds good. as our requirements for the shared files are low.
20:32 devZer0 it`s just some export/imports running on that, no live-data which is being accessed permanently
20:34 devZer0 one last question - if node2 goes down and comes back online and has not yet all changed data from node1 - what data will i "see" on the second node?
20:34 devZer0 is it for sure that if i read a recently changed file (changed on node1) that i see all the changes node1 did?
20:38 glusterbot News from resolvedglusterbugs: [Bug 1163723] GlusterFS 3.6.2 tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1163723>
20:38 gnudna you will see it doing a heal
20:39 gnudna aka replicating the data over
20:39 deniszh joined #gluster
20:39 gnudna im not sure if diff is by default to be honest though
20:39 gnudna works well enough for my use case
20:40 gnudna i am sure i will do more tweaks as i get more comfortable with it
20:40 devZer0 ok, i think data volume is so low that this will be no problem, as resync should happen very fast....
20:40 devZer0 so this will be a valuable path to try, thank you gnudna for your input
20:40 gnudna yeah in general i am doing a few gigs
20:43 wdennis joined #gluster
20:44 devZer0 ah, i think i found some description which describes the setup i need: https://www.brightbox.com/docs/guides/glusterfs/
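What that guide describes is an ordinary two-node replica volume with a glusterfs client mount on each node; the client talks to both bricks, so either node can serve the data if the other is down. A minimal sketch, with hostnames and paths as placeholders:

    gluster peer probe node2                      # run once from node1
    gluster volume create shared replica 2 node1:/bricks/shared node2:/bricks/shared
    gluster volume start shared
    # on each node, use the client mount, never the brick directory directly
    mount -t glusterfs localhost:/shared /mnt/shared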
20:46 wdennis Hi folks - new Gluster admin here, inherited a 3.5.1 volume that's distributed/replicated... Is there a simple way to list the brick replication partners, or at least the other partner for a given brick mountpoint?
20:47 wdennis BTW, cannot find a 3.5 Admin Guide out there, best I can find is 3.3... Does newer exist somewhere?
20:47 o5k joined #gluster
20:47 devZer0 left #gluster
20:52 squizzi wdennis: the admin guide source can be found here https://github.com/gluster/glusterfs/tree/master/doc/admin-guide/en-US/markdown
20:53 roost joined #gluster
20:53 gnudna wdennis gluster volume info your-volume-name
20:53 wdennis squizzi: thx, is there a document organization layout somewhere there, or do I just not see it? (i.e. what order do the .md files go in?)
20:53 gnudna might show the info your looking for
20:54 gnudna later guys
20:54 squizzi wdennis: not that I know of; documentation for gluster is housed in the gluster code base, my guess is they periodically merge those additions into the more official looking pdf releases
20:54 gnudna left #gluster
20:54 squizzi see: http://www.gluster.org/community/documentation/#GlusterFS_3.5
20:55 wdennis gnudna: no, this just lists all the bricks, does not indicate replic partners...
20:58 Alpinist joined #gluster
21:01 Gill left #gluster
21:08 wdennis Aha - /var/lib/glusterd/vols/<volname>/<volname>-fuse.vol -- this lists the bricks (volumes) and then lists the subvolumes (repl partner bricks)
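For reference, the replica pairs appear in that client vol file as cluster/replicate subvolumes, roughly like this (volume name is a placeholder):

    volume myvol-replicate-0
        type cluster/replicate
        subvolumes myvol-client-0 myvol-client-1
    end-volume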
21:10 deniszh joined #gluster
21:23 deniszh joined #gluster
21:23 wkf joined #gluster
21:24 ctria joined #gluster
21:26 spiekey joined #gluster
21:26 JoeJulian @brick order
21:26 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
21:28 Alpinist joined #gluster
21:34 wdennis JoeJulian: hard to know replica pairs if you didn't do the 'volume create' command ;)
21:34 JoeJulian volume info works. They're in order.
21:34 xiu joined #gluster
21:35 wdennis JoeJulian: I figured there was a way for the Gluster system to tell me post-facto, and /var/lib/glusterd/vols/<volname>/<volname>-fuse.vol seems to be it
21:36 elico joined #gluster
21:36 wdennis JoeJulian: I see - because the bricks are created via the command you informed me of, they end up in 'volume info' as 'Brick0', 'Brick1', etc
21:37 JoeJulian (1,2,3,etc) but yes.
21:38 JoeJulian So with a replica 2, brick 1+2, 3+4, 5+6, etc. replica 3 would, of course, be 1+2+3, 4+5+6, etc.
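So, for a hypothetical replica 2 volume with four bricks, "gluster volume info" would show something like this (hostnames are placeholders, annotations added):

    Number of Bricks: 2 x 2 = 4
    Brick1: serverA:/data/brick1    <- replica pair 1
    Brick2: serverB:/data/brick1    <- replica pair 1
    Brick3: serverC:/data/brick1    <- replica pair 2
    Brick4: serverD:/data/brick1    <- replica pair 2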
21:38 wdennis JoeJulian: ah, I see
21:38 wdennis was just trying to confirm without guessing :)
21:39 wdennis The reason being is that we need to remove bricks, as we are going to decomm older servers that are hosting bricks
21:39 JoeJulian yep
21:39 JoeJulian It wouldn't let you remove the wrong pair.
21:40 wdennis ...and we aren't using all the space of the Gluster volume anyhow, so we can tolerate the space decrease
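Decommissioning a replica pair from a distributed-replicate volume is normally done with remove-brick in start/commit mode so the data is migrated off the pair first; a sketch, with the volume and brick names as placeholders:

    gluster volume remove-brick myvol old1:/data/brick1 old2:/data/brick1 start
    gluster volume remove-brick myvol old1:/data/brick1 old2:/data/brick1 status
    # once status reports completed:
    gluster volume remove-brick myvol old1:/data/brick1 old2:/data/brick1 commit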
21:40 deniszh joined #gluster
21:40 wdennis JoeJulian: oh, that's good :)  easy to make a typo now and then...
21:40 JoeJulian +1
21:41 JoeJulian dammit.. I should have said, "+!" it would have been funnier.
21:41 wdennis JoeJulian: :)
21:45 Pupeno_ joined #gluster
21:47 shaunm joined #gluster
21:51 tessier joined #gluster
22:05 T3 joined #gluster
22:08 glusterbot News from resolvedglusterbugs: [Bug 1138567] Disabling Epoll (using poll()) - fails SSL tests on mount point <https://bugzilla.redhat.com/show_bug.cgi?id=1138567>
22:15 badone joined #gluster
22:28 kovshenin joined #gluster
22:35 osc_khoj joined #gluster
22:44 osc_khoj HI, guys, I installed 3 node glusterfs env.
22:44 osc_khoj I want to know whether or not this architecture is correct.
22:44 osc_khoj especially, how to set the geo-replication volume
22:44 osc_khoj env :     ubuntu 12.04 LTS + glusterfs 3.6.1
22:44 osc_khoj 2node Distributed + Replication volume + 1 node geo-replication
22:44 osc_khoj server name :   KR01/KR02 +  JP01 on AWS
22:44 osc_khoj root@KR01:~# gluster volume status
22:44 osc_khoj Status of volume: krprdnas
22:44 osc_khoj Gluster process                              Port     Online     Pid
22:44 osc_khoj ------------------------------------------------------------------------------
22:44 glusterbot osc_khoj: ----------------------------------------------------------------------------'s karma is now -2
22:44 osc_khoj Brick KR01:/disk01               49152     Y     1581
22:44 osc_khoj Brick KR02:/disk01               49152     Y     1570
22:44 osc_khoj Brick KR01:/disk02               49153     Y     1586
22:44 osc_khoj Brick KR02:/disk02               49153     Y     1575
22:44 osc_khoj NFS Server on localhost                         2049     Y     1591
22:44 osc_khoj Self-heal Daemon on localhost                    N/A     Y     1599
22:44 osc_khoj NFS Server on KR01                   2049     Y     1585
22:44 osc_khoj Self-heal Daemon on 192.168.0.240               N/A     Y     1580
22:44 osc_khoj root@JP01:~# gluster volume status
22:44 osc_khoj Status of volume: krprdnas-bk
22:44 osc_khoj Gluster process                              Port     Online     Pid
22:44 osc_khoj ------------------------------------------------------------------------------
22:44 glusterbot osc_khoj: ----------------------------------------------------------------------------'s karma is now -3
22:44 osc_khoj Brick JP01:/disk01               49152     Y     1554
22:44 osc_khoj Brick JP01/disk02               49153     Y     1564
22:44 osc_khoj NFS Server on localhost                         2049     Y     1569
22:44 osc_khoj root@KR1-PRD-FS01:~# gluster volume geo-replication status
22:44 osc_khoj MASTER NODE     MASTER VOL    MASTER BRICK           SLAVE                 STATUS     CHECKPOINT STATUS    CRAWL STATUS
22:44 osc_khoj -------------------------------------------------------------------------------------------------------------------------------
22:44 glusterbot osc_khoj: -----------------------------------------------------------------------------------------------------------------------------'s karma is now -1
22:45 osc_khoj KR01    krprdnas      /disk01    192.168.0.139::krprdnas-bk    Active     N/A                  Changelog Crawl
22:45 osc_khoj KR01    krprdnas      /disk02    192.168.0.139::krprdnas-bk    Active     N/A                  Changelog Crawl
22:45 osc_khoj KR02    krprdnas      /disk01    192.168.0.139::krprdnas-bk    Passive    N/A                  N/A
22:46 osc_khoj In JP01
22:46 osc_khoj gluster volume info
22:46 osc_khoj
22:46 osc_khoj Volume Name: krprdnas-bk
22:46 osc_khoj Type: Distribute
22:47 JoeJulian @paste
22:47 glusterbot JoeJulian: For RPM based distros you can yum install fpaste, for debian, ubuntu, and arch it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
22:47 JoeJulian please use one of those instead of scrolling all our conversations into oblivion.
22:51 osc_khoj ok, I will do it, sorry..
22:56 weemoog joined #gluster
23:00 osc_khoj @paste http://paste.ubuntu.com/10686616/
23:01 osc_khoj please <osc_khoj> HI, guys, I installed 3 node glusterfs env.
23:01 osc_khoj <osc_khoj> I want to know whether or not this architecture is correct.
23:01 osc_khoj <osc_khoj> especially, how to set the geo-replication volume
23:01 osc_khoj <osc_khoj> env :     ubuntu 12.04 LTS + glusterfs 3.6.1
23:01 osc_khoj <osc_khoj> 2node Distributed + Replication volume + 1 node geo-replication
23:01 osc_khoj <osc_khoj> server name :   KR01/KR02 +  JP01 on AWS
23:02 osc_khoj Would you mind giving some recommendations?
23:02 JoeJulian Looks right to me.
23:03 osc_khoj On the 2 master nodes I use the Distributed-Replicate type, and for geo-replication I use the Distributed type.
23:04 JoeJulian There's nothing wrong with that if it suits your needs.
23:04 osc_khoj but, normally there are no examples for the distributed - geo replication case...Is there any doc about it?
23:05 osc_khoj Thanks, JoeJulina..
23:05 osc_khoj ^^
23:06 JoeJulian It doesn't matter what configuration you use for your slave volume. That choice has no effect on the function of geo-replication.
23:08 weemoog Hello, I'd like to know how I can fix a specific glusterfsd brick port for a volume - example: using port 49200 instead of 49155 for the volume "examplevol"? thank you ^^
23:08 d-fence joined #gluster
23:09 T3 joined #gluster
23:09 osc_khoj If I want to construct a distributed-replicate volume in geo-replication, is it the same as distributed in geo-replication?..by gluster volume geo-replication krprdnas $GEO_IP::krprdnas-bk create push-pem force
23:09 semiosis weemoog: i dont think you can do that
23:10 JoeJulian osc_khoj: yes, it's the same.
23:11 osc_khoj Thanks JoeJulian../
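For reference, distributed geo-replication setup in 3.6 is the same handful of commands whatever the master and slave volume layouts are; a sketch using the names from the paste above, assuming passwordless root SSH from a master node to the slave:

    gluster system:: execute gsec_create
    gluster volume geo-replication krprdnas JP01::krprdnas-bk create push-pem
    gluster volume geo-replication krprdnas JP01::krprdnas-bk start
    gluster volume geo-replication krprdnas JP01::krprdnas-bk status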
23:12 weemoog I tried to modify ports on /var/lib/glusterd/vols/gfsvol1/bricks/10.4.7.200:-srv-gluster, but it doesn't work. thank you anyway
23:14 JoeJulian weemoog: You could probably do something with hooks to either effect your firewall or to create port-forwarding rules with iptables...
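Since brick ports are assigned by glusterd rather than configured per volume, one workaround along the lines JoeJulian suggests is a NAT redirect on the brick host; a sketch only, using the ports from the question:

    # accept connections on the desired fixed port and hand them to the assigned brick port
    iptables -t nat -A PREROUTING -p tcp --dport 49200 -j REDIRECT --to-ports 49155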
23:14 weemoog Sure, I've already looked around ^^
23:18 julim joined #gluster
23:23 julim joined #gluster
23:27 MugginsM joined #gluster
23:44 T3 joined #gluster
23:53 T3 joined #gluster
23:58 Gill joined #gluster
