
IRC log for #gluster, 2015-10-15


All times shown according to UTC.

Time Nick Message
00:00 shortdudey123 joined #gluster
00:30 badone_ joined #gluster
00:40 wonko JoeJulian: ok, thanks for the info! I'll take a poke at it. I just found out there are NetApps available so I'll probably punt to those for this task. gluster is still on my radar though.
00:45 vimal joined #gluster
00:52 msciciel joined #gluster
01:01 zhangjn joined #gluster
01:01 EinstCra_ joined #gluster
01:06 shyam joined #gluster
01:21 armyriad joined #gluster
01:25 Lee1092 joined #gluster
01:25 delhage joined #gluster
01:45 calavera joined #gluster
01:45 G-Doge joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:51 nbalacha joined #gluster
01:56 gildub joined #gluster
02:01 zhangjn joined #gluster
02:13 G-Doge Hello, is anyone having problems with glusterfind in 3.7.4 being very slow and cpu-intensive?
02:14 nangthang joined #gluster
02:40 rafi joined #gluster
02:42 atinm joined #gluster
02:52 bharata-rao joined #gluster
03:03 shaunm joined #gluster
03:08 zhangjn joined #gluster
03:11 gildub joined #gluster
03:22 [o__o] joined #gluster
03:22 vmallika joined #gluster
03:26 yangfeng joined #gluster
03:28 semiosis joined #gluster
03:28 jatb joined #gluster
03:29 [7] joined #gluster
03:29 fubada joined #gluster
03:30 Intensity joined #gluster
03:31 nishanth joined #gluster
03:40 stickyboy joined #gluster
03:44 haomaiwa_ joined #gluster
03:48 rafi joined #gluster
03:52 DV joined #gluster
04:01 haomaiwa_ joined #gluster
04:02 atalur joined #gluster
04:03 harish joined #gluster
04:04 maveric_amitc_ joined #gluster
04:11 itisravi joined #gluster
04:13 RameshN joined #gluster
04:16 kanagaraj joined #gluster
04:17 ppai joined #gluster
04:30 itisravi_ joined #gluster
04:30 sakshi joined #gluster
04:30 gem joined #gluster
04:30 jiffin joined #gluster
04:33 hagarth joined #gluster
04:38 itisravi_ joined #gluster
04:40 yazhini joined #gluster
04:43 dusmant joined #gluster
04:47 zhangjn joined #gluster
04:49 neha_ joined #gluster
04:54 dusmant joined #gluster
04:55 DV__ joined #gluster
04:58 rafi joined #gluster
05:06 ndarshan joined #gluster
05:09 atalur joined #gluster
05:12 RameshN joined #gluster
05:13 ramteid joined #gluster
05:14 pppp joined #gluster
05:15 TvL2386 joined #gluster
05:18 zhangjn joined #gluster
05:33 calavera joined #gluster
05:35 hagarth joined #gluster
05:36 zhangjn joined #gluster
05:38 Bhaskarakiran joined #gluster
05:39 zhangjn joined #gluster
05:46 sakshi joined #gluster
05:46 rjoseph joined #gluster
05:48 ashiq joined #gluster
05:49 shubhendu joined #gluster
05:50 Manikandan joined #gluster
05:53 zhangjn joined #gluster
05:54 Humble joined #gluster
05:55 kdhananjay joined #gluster
05:57 Humble joined #gluster
06:05 raghu joined #gluster
06:11 fubada joined #gluster
06:12 7GHABE4ZH joined #gluster
06:13 atalur joined #gluster
06:17 skoduri joined #gluster
06:17 zhangjn joined #gluster
06:18 kdhananjay joined #gluster
06:19 kotreshhr joined #gluster
06:19 haomaiwa_ joined #gluster
06:20 mhulsman joined #gluster
06:23 hagarth joined #gluster
06:26 itisravi joined #gluster
06:26 vmallika joined #gluster
06:27 jtux joined #gluster
06:28 zhangjn joined #gluster
06:29 kshlm joined #gluster
06:33 Bhaskarakiran joined #gluster
06:33 hchiramm joined #gluster
06:34 zhangjn joined #gluster
06:35 rafi left #gluster
06:39 zhangjn joined #gluster
06:40 [Enrico] joined #gluster
06:44 karnan joined #gluster
06:46 nangthang joined #gluster
06:51 LebedevRI joined #gluster
06:54 deepakcs joined #gluster
06:59 yangfeng joined #gluster
07:12 poornimag joined #gluster
07:13 thoht_ joined #gluster
07:13 thoht_ for now; i got a gluster volume replica 3 between 2 fast nodes and 1 slow node
07:14 thoht_ as the devices are not in same Datacenter, it is done through a vpn lan between the 3 devices
07:15 thoht_ my question is about georeplication: is it worth it to remove brick3 (the slow device) and have replica 2, then add georeplication with the third node?
07:17 Manikandan joined #gluster
07:36 deniszh joined #gluster
07:37 kdhananjay1 joined #gluster
07:41 itisravi_ joined #gluster
07:48 Chr1st1an Quick question is gluster 3.7 dependent on bricks being setup with LVM?
07:48 Chr1st1an You won't get snapshots without it, but is there anything else that might cause issues?
07:49 atinm joined #gluster
07:49 Manikandan joined #gluster
07:50 haomaiwang joined #gluster
07:56 ivan_rossi joined #gluster
08:00 akik joined #gluster
08:01 kshlm joined #gluster
08:06 Norky joined #gluster
08:07 mhulsman joined #gluster
08:17 maveric_amitc_ joined #gluster
08:17 RayTrace_ joined #gluster
08:22 Guest96084 joined #gluster
08:24 rastar_afk thoht_: yes, or better would be to have a 3rd brick or arbiter in the same datacenter and create georep with the remote node.
08:27 thoht_ rastar: unfortunately i don't have the ability to have a 3rd device in the same DC
08:27 thoht_ for now; i ve 3 bricks, 2 nodes in same DC and a remote 1
08:28 thoht_ writes are slow due to latency with the third one
08:28 thoht_ that s why i want to modify a bit the system. so to sum-up; the steps would be:
08:28 thoht_ 1/ set the third node in maintenance (proxmox node cluster)
08:29 thoht_ 2/ remove the third brick to be in replica 2 situation
08:29 thoht_ 3/ set geolocation between node2 (or node1) against the third remote node
08:29 thoht_ 4/ remove maintenance of third node
08:30 thoht_ i won t use the remote node for anything; it will act only as a backup
08:32 jiffin Chr1st1an: a brick set up with LVM is a must for the snapshot feature, as far as I understand
08:33 Chr1st1an But not for anything else?
08:34 Chr1st1an We don't use the snapshot feature atm with the current setup , so upgrading should not cause any issues hopefully
08:34 Chr1st1an Just taken from Redhat Storage 3.1U1 - Upgrade Req: Each brick must be independent thinly provisioned logical volume(LV).
08:35 ctria joined #gluster
08:35 RayTrace_ joined #gluster
08:36 auzty joined #gluster
08:36 Chr1st1an If im also correct the replace brick command is greatly improved in 3.7 over 3.4 release
08:37 shubhendu joined #gluster
08:37 RayTrace_ joined #gluster
08:37 EinstCrazy joined #gluster
08:37 Chr1st1an So upgrading and then migrating to LVM would be the fastest way to get it done, but that depends on non-LVM volumes working with 3.7. ( I’m sure they will )
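For reference, the thin-provisioned brick layout that Red Hat requirement refers to looks roughly like this; the device, VG/LV names and sizes below are illustrative, not taken from the discussion:

    pvcreate /dev/sdb
    vgcreate vg_bricks /dev/sdb
    lvcreate -L 500G -T vg_bricks/thinpool              # thin pool
    lvcreate -V 450G -T vg_bricks/thinpool -n brick1    # thinly provisioned LV for the brick
    mkfs.xfs -i size=512 /dev/vg_bricks/brick1
    mkdir -p /bricks/brick1 && mount /dev/vg_bricks/brick1 /bricks/brick1

Only the snapshot feature depends on this layout; plain directories or non-thin LVs keep working as bricks for everything else, which matches jiffin's answer above.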
08:37 ramky joined #gluster
08:39 nbalacha joined #gluster
08:44 zhangjn joined #gluster
08:45 fubada joined #gluster
08:47 haomaiwa_ joined #gluster
08:48 Slashman joined #gluster
08:56 RayTrace_ joined #gluster
08:56 sripathi joined #gluster
08:59 RayTrace_ joined #gluster
09:05 mhulsman joined #gluster
09:05 haomaiwang joined #gluster
09:11 poornimag joined #gluster
09:11 spalai joined #gluster
09:18 ppai joined #gluster
09:24 RayTrace_ joined #gluster
09:31 shubhendu_ joined #gluster
09:39 stickyboy joined #gluster
09:39 haomaiwa_ joined #gluster
09:46 ppai joined #gluster
09:47 haomaiwa_ joined #gluster
09:52 marlinc joined #gluster
09:58 RayTrace_ joined #gluster
10:01 kxseven joined #gluster
10:04 arcolife joined #gluster
10:07 Philambdo joined #gluster
10:07 dmnchild joined #gluster
10:11 jiffin1 joined #gluster
10:16 dmnchild Soo, I am sure this has been asked, but for the life of me I cant find any online documentation or fixes.. on how to achieve: gluster volume set cluster ssl.cipher-list HIGH:!SSLv2
10:16 ppai joined #gluster
10:16 dmnchild debian 8, gluster 3.7.2
10:16 itisravi joined #gluster
10:17 dmnchild if anyone knows some solid docs/help sites, would be great ;(
10:17 lalatenduM joined #gluster
10:18 lalatenduM joined #gluster
10:27 poornimag joined #gluster
10:27 najib joined #gluster
10:28 danielbellantuon joined #gluster
10:28 skoduri joined #gluster
10:28 gcivitella joined #gluster
10:32 danielbellantuon joined #gluster
10:34 danielbellantuon Hi guys!
10:36 danielbellantuon I'm trying to get configuration options of a specific volume, but i didn't find the command to do it
10:36 danielbellantuon Do you have any idea?
10:40 dmnchild Do you mean like options you set other than default?
10:40 dmnchild kinda noobish myself, but I think everything shows up with
10:40 dmnchild gluster volume info
10:43 aravindavk joined #gluster
10:43 thoht_ to remove a brick from a replica3 and to go to replica2 and to remove the brick node3, it is correct syntax:
10:43 thoht_ gluster volume remove-brick pve-gluster-share replica 2 node3:/shared/vm/data force
10:43 thoht_ i m not sure about "force"
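That is the documented form for shrinking the replica count; a sketch of the step using the names from this log, where force acknowledges that the copy on the removed brick is simply dropped (node1 and node2 keep the data):

    gluster volume remove-brick pve-gluster-share replica 2 node3:/shared/vm/data force
    gluster volume info pve-gluster-share    # should now report a 1 x 2 = 2 replica volume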
10:44 deepakcs dmnchild: what's the issue with setting the ssl cipher-list option? your Q is not clear
10:44 harish_ joined #gluster
10:44 dmnchild -bash: !SSLv2: event not found
10:44 dmnchild when I try to enable that
10:45 thoht_ back slash the !
10:45 thoht_ dmnchild: try \!SSLv2
10:45 thoht_ otherwise bash is interpreting !
10:45 deepakcs gluster volume set cluster ssl.cipher-list "HIGH:!SSLv2"
10:45 deepakcs dmnchild: ^^
10:45 thoht_ or set the double quote
10:46 danielbellantuon <dmnchild>: thanks a lot!
10:46 dmnchild quotes didnt work, but the backslash did. props, ty much
10:47 dmnchild seems this should be documented since so many people's tutorials have it without that :|
10:47 itisravi_ joined #gluster
10:48 ppai joined #gluster
10:52 ira joined #gluster
10:57 JoeJulian fwiw, 'single quotes' should have worked, just not "double".
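To spell the quoting issue out: `!` triggers bash history expansion, and in an interactive shell that expansion still happens inside double quotes, so only single quotes or a backslash keep it literal ("cluster" here being the volume name from dmnchild's command):

    gluster volume set cluster ssl.cipher-list 'HIGH:!SSLv2'    # single quotes: ! stays literal
    gluster volume set cluster ssl.cipher-list HIGH:\!SSLv2     # unquoted backslash, what worked above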
10:59 thoht_ i finally removed a brick and i'm in replica 2 now instead of replica 3
10:59 thoht_ but node3 still shows info when i do "gluster volume info", with info about the 2 other bricks node1 and node2
10:59 thoht_ how to remove that ?
11:00 EinstCrazy joined #gluster
11:03 JoeJulian peer detach
11:04 thoht_ nice JoeJulian thanks
11:04 thoht_ because i did a gluster volume stop VOL
11:04 thoht_ and it stopped it on the 2 other nodes :(
11:04 thoht_ (i was on node3)
11:04 thoht_ it did the work; thanks a lot
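The cleanup JoeJulian points at, run from node1 or node2 once node3 no longer hosts a brick (names from this log). Note that volume commands such as `gluster volume stop` are cluster-wide, which is why running it on node3 stopped the volume for node1 and node2 too:

    gluster peer detach node3                 # drop node3 from the trusted pool
    gluster peer status                       # node3 should no longer be listed
    gluster volume start pve-gluster-share    # only if the accidental stop left the volume stopped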
11:05 thoht_ so now let s try the georeplication
11:07 lpabon joined #gluster
11:10 itisravi joined #gluster
11:13 kaushal_ joined #gluster
11:17 ppai joined #gluster
11:17 ppai_ joined #gluster
11:21 cyberbootje joined #gluster
11:28 thoht_ trying to setup geo-rep but it failed on following command : gluster volume geo-replication pve-gluster-share  node3::pve-gluster-geo create push-pem force
11:28 thoht_ Unable to fetch slave volume details. Please check the slave cluster and slave volume
11:28 thoht_ i did the ssh-key actions as well and can connect to node3 as root without password
11:29 thoht_ i m following https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/sect-Preparing_to_Deploy_Geo-replication.html
11:30 glusterbot Title: 12.3. Preparing to Deploy Geo-replication (at access.redhat.com)
11:30 thoht_ any clue is welcome
11:39 ppai joined #gluster
11:47 spalai left #gluster
11:52 jiffin thoht_,: aravindavk, kotreshhr may help you
11:56 side_control joined #gluster
11:58 kotreshhr thoht_: Hey, could you run this cmd  "bash -x /usr/local/libexec/glusterfs/gverify.sh <master vol> root <slave hostname> <slave vol> /tmp/log"
11:59 kotreshhr thoht_: and check the values of slave_disk_size and slave_version
12:01 thoht_ karnan: bash: /usr/local/libexec/glusterfs/gverify.sh: No such file or directory
12:01 kotreshhr thoht_: It should be in /usr/libexec/glusterfs then
12:01 thoht_ neither
12:02 jiffin1 joined #gluster
12:02 kotreshhr thoht_: Which platform are you running on?
12:02 thoht_ karnan: debian Jessie 8
12:03 thoht_ ok i found it into /usr/lib/x86_64-linux-gnu/glusterfs/gverify.sh
12:03 kotreshhr thoht_: Oh you are running on ubuntu then..
12:03 thoht_ karnan: not ubuntu
12:03 thoht_ i m using this repo : deb http://httpredir.debian.org/debian/ jessie-backports main contrib non-free
12:03 glusterbot Title: Index of /debian/ (at httpredir.debian.org)
12:04 thoht_ running on 8.2 Debian
12:04 kotreshhr thoht_: ok, yeah debian.
12:04 kotreshhr thoht_: Could you run and check those values?
12:04 thoht_ i guess the command is 2> instead of > /tmp/log
12:04 thoht_ isnt it ?
12:05 ppai joined #gluster
12:05 thoht_ karnan: this is the log http://pastebin.com/JJjbc2z7
12:05 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
12:06 thoht_ kotreshhr: sorry it was for you the previous message
12:06 kotreshhr thoht_: Yeah got it:)
12:08 kotreshhr thoht_: the script expects a log file as that last argument, so: ... /tmp/log 2> /tmp/output
12:08 kotreshhr thoht_: Paste the output of /tmp/output
12:09 kotreshhr thoht_: sorry, got it
12:09 thoht_ >/tmp/log 2>&1 ?
12:09 thoht_ still needed ?
12:11 kotreshhr thoht_: no!
12:12 kotreshhr thoht_: From the logs, I see slave volume could not be mounted.
12:12 kotreshhr thoht_: Could you mount the slave volume manually and check why is it failing?
12:13 thoht_ kotreshhr: on the slave there is nothing
12:13 thoht_ i mean i didn't create anything
12:13 thoht_ i m following the doc
12:14 thoht_ it didn't say to create any volume, i'm confused
12:14 thoht_ previously i got 3 bricks in replica 3
12:14 thoht_ i just removed node3 and now i m in replica2
12:14 thoht_ and i want to setup georep with node3
12:15 kotreshhr thoht_: Oh, you should have two gluster volumes one being master and other being slave setup before geo-rep.
12:15 thoht_ (because node3 is in another DC and it creates too much latency when it is in brick3)
12:15 kotreshhr thoht_: I am confused then, node3::pve-gluster-geo is not a gluster vol created on node3?
12:16 thoht_ nope
12:16 thoht_ i just followed the doc
12:16 thoht_ :/
12:16 thoht_ https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/sect-Preparing_to_Deploy_Geo-replication.html
12:16 glusterbot Title: 12.3. Preparing to Deploy Geo-replication (at access.redhat.com)
12:16 thoht_ and https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Geo%20Replication/
12:16 glusterbot Title: Geo Replication - Gluster Docs (at gluster.readthedocs.org)
12:16 thoht_ i'm totally CONFUSED
12:16 thoht_ :)
12:17 thoht_ so what should i do ?
12:17 thoht_ for now, i ve a running gluster volume replica 2 with node1 and node2. i want to add node3 as a slave with geo-rep
12:17 thoht_ and i ve still the data on node3 when it was a replica 3
12:18 kotreshhr thoht_: You need to create slave gluster volume. Geo-rep will sync the data from master volume to slave volume. In doc, though its not mentioned specifically, It does say this "The following are prerequisites for deploying geo-replication:The master and slave volumes must be of same version of Red Hat Gluster Storage instances"
12:19 kotreshhr thoht_: which means slave gluster volume should be created and of the same versions
12:19 thoht_ what is "same version"?
12:19 thoht_ version of glusterfs ?
12:19 kotreshhr Yes!
12:20 thoht_ node1 node2 node3 are 3.7.4
12:20 thoht_ all same
12:20 kshlm joined #gluster
12:20 kotreshhr thoht_: Ok ok, node3 was part of replica brick earlier. I got you now.
12:20 thoht_ kotreshhr: yes
12:21 kotreshhr thoht_: So node1 and node2 with replica 2 gluster volume will be master volume and you want node3::brick  to be slave ?
12:22 thoht_ kotreshhr: exactly !!!
12:22 kotreshhr thoht_: You need to create gluster volume on node3! Before that, remove node3 from cluster.
12:23 thoht_ i already removed node3 from cluster
12:23 kotreshhr thoht_: then create a gluster volume. the size of slave gluster volume should not be less than master volume.
12:23 thoht_ kotreshhr: http://pastebin.com/3YxeV5cg
12:23 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
12:23 poornimag joined #gluster
12:23 thoht_ kotreshhr: i ve still the folder present in a lv with all the data on node3
12:25 kotreshhr thoht_: Geo-rep expects the new slave volume to be empty!
12:25 skoduri joined #gluster
12:26 [Enrico] joined #gluster
12:26 thoht_ kotreshhr: ok i can empty it
12:26 haomaiwa_ joined #gluster
12:27 kotreshhr thoht_: Cool, create a fresh gluster volume on node3 with a size equal to or greater than the master volume and follow the procedure. It should work.
12:27 thoht_ ok i got a new folder dedicated : /shared/geo/data
12:28 thoht_ gluster volume create shared-geo node3:/shared/geo/data
12:29 thoht_ ok kotreshhr , do i start it ?
12:29 rmgroth has anyone done gluster bricks with an LSI 9xxx series and Cachecade?
12:30 shyam joined #gluster
12:30 spcmastertim joined #gluster
12:30 kotreshhr thoht_: yes, start it and attempt geo-rep create command
12:31 thoht_ node3::shared-geo is not empty
12:31 thoht_ but it is
12:31 thoht_ i mean there are only the hidden .glusterfs and .trashcan entries
12:32 kotreshhr Are you sure it is empty other than .trashcan and .glusterfs?
12:33 thoht_ kotreshhr: http://pastebin.com/yMycGs0g
12:33 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
12:35 kotreshhr thoht_: I will look into it. But for now you can add the force option at the end and try create again
12:35 thoht_ Creating geo-replication session between pve-gluster-share & node3::shared-geo has been successful
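For reference, the slave-side and master-side commands that lead up to this point, roughly (names from this log; the trailing force on create was the workaround for the leftover .glusterfs/.trashcan entries on node3):

    # on node3 (now its own one-node cluster after peer detach)
    gluster volume create shared-geo node3:/shared/geo/data
    gluster volume start shared-geo

    # on node1 or node2 (master cluster), once passwordless ssh to node3 is in place
    gluster volume geo-replication pve-gluster-share node3::shared-geo create push-pem force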
12:35 kotreshhr thoht_: Start the geo-rep session
12:36 thoht_ kotreshhr: from where ?
12:37 kotreshhr thoht_: From the node which you ran create command
12:38 thoht_ kotreshhr: volume start: shared-geo: failed: Volume shared-geo does not exist
12:39 thoht_ i did : gluster volume start shared-geo
12:40 kotreshhr thoht_: no no...
12:40 thoht_ kotreshhr: gluster volume geo-replication pve-gluster-share node3::shared-geo  start ?
12:40 kotreshhr thoht_: Follow the doc, now you need to create a gluster shared storage on master cluster
12:41 kotreshhr thoht_: and enable the config use_meta_volume to true
12:41 kotreshhr thoht_: then do "gluster volume geo-replication pve-gluster-share node3::shared-geo  start"
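kotreshhr's ordering as commands, all run from a master-cluster node (volume names from this log):

    gluster volume set all cluster.enable-shared-storage enable
    # ^ creates the gluster_shared_storage meta volume and mounts it on the pool nodes
    gluster volume geo-replication pve-gluster-share node3::shared-geo config use_meta_volume true
    gluster volume geo-replication pve-gluster-share node3::shared-geo start
    gluster volume geo-replication pve-gluster-share node3::shared-geo status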
12:42 thoht_ geo-replication config updated successfully
12:42 thoht_ ok for the meta volume true
12:42 thoht_ ok for the start also
12:43 thoht_ kotreshhr: this is the status: http://fpaste.org/279496/12983144/
12:43 glusterbot Title: #279496 Fedora Project Pastebin (at fpaste.org)
12:43 thoht_ is it fine ?
12:43 thoht_ i can see Faulty
12:45 unclemarc joined #gluster
12:45 kotreshhr thoht_: No, it's not fine. It went to Faulty because you didn't setup shared storage.
12:45 kotreshhr https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Red_Hat_Storage_Volumes-Shared_Volume.html
12:45 glusterbot Title: 9.8. Setting up Shared Storage Volume (at access.redhat.com)
12:45 kotreshhr Yes, set that option!
12:46 kotreshhr thoht_: It's a volume set option
12:47 thoht_ but i did the use_meta_volume
12:47 dblack joined #gluster
12:47 thoht_ ok it is a new link
12:48 haomaiwang joined #gluster
12:48 kotreshhr thoht_: You should set this option and then do "use_meta_volume' thing
12:49 thoht_ oh my ! i did only the use_meta_volume
12:49 thoht_ is it possible to fix that ?
12:49 kotreshhr thoht_: It's ok, that won't be a problem
12:49 thoht_ kotreshhr: so i can run: gluster volume geo-replication pve-gluster-share  node3::shared-geo config  cluster.enable-shared-storage enable ?
12:50 kotreshhr thoht_: I just said, usually that's the order to be followed.
12:50 kotreshhr thoht_: Hey no no no
12:50 kotreshhr thoht_: It's not geo-rep config command
12:51 kotreshhr thoht_: gluster vol set all cluster.enable-shared-storage enable
12:52 thoht_ volume set: success
12:53 thoht_ but status still show Faulty
12:54 kotreshhr thoht_: wait for some time
12:54 kotreshhr thoht_: geo-rep will pick up the status
12:54 kotreshhr thoht_: check now
12:54 thoht_ still faulty
12:54 thoht_ no change
12:54 thoht_ should i restart the volume ?.
12:54 kotreshhr thoht_: Could you stop the geo-rep and start?
12:55 _maserati_ joined #gluster
12:55 thoht_ kotreshhr: ok restarted
12:55 haomaiwa_ joined #gluster
12:56 kotreshhr thoht_: status?
12:57 thoht_ still faulty
12:59 thoht_ kotreshhr: what should i do ?
13:01 kotreshhr thoht_: could you check whether gluster_shared_storage is mounted?
13:01 haomaiwa_ joined #gluster
13:02 thoht_ kotreshhr:  gluster_shared_storage ?
13:03 thoht_ the brick2 ?
13:03 kotreshhr thoht_: gluster_shared_storage
13:04 kotreshhr thoht_: yes!
13:04 thoht_ so it is  pve-gluster-share
13:04 thoht_ it is indeed mounted locally on node1 and node2
13:04 thoht_ like localhost:pve-gluster-share   300G   95G  206G  32% /mnt/pve/gluster-share
13:04 kotreshhr thoht_: No, the volume set option you did will create a new volume and mount it
13:04 thoht_ oh
13:04 scubacuda joined #gluster
13:04 kotreshhr thoht_: I wanted to confirm that
13:06 danielbellantuon joined #gluster
13:07 kotreshhr thoht_: mount | grep shared_storage
13:07 thoht_ the new volume name is shared-geo on node3
13:08 thoht_ what is shared_storage ?
13:08 kotreshhr thoht_: I think you are confused.
13:08 thoht_ certainly
13:08 kotreshhr thoht_: To store configuration values, geo-rep expects a third volume only used for storing lock files...
13:09 thoht_ a third volume ? ooohhh
13:09 thoht_ but i didn't create it
13:09 thoht_ isnt it
13:09 kotreshhr thoht_: the cluster.enable-shared-storage option creates this third volume and mounts it for you.
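A quick way to check what kotreshhr describes, on node1 or node2 (the meta volume is separate from both pve-gluster-share and shared-geo):

    gluster volume info gluster_shared_storage    # the auto-created meta volume
    mount | grep shared_storage                   # should show it fuse-mounted locally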
13:10 danielbellantuon Somebody know where i can find a documentation on how to work read cache of Gluster?
13:10 julim joined #gluster
13:10 thoht_ how can i see the new volume created ?
13:12 thoht_ kotreshhr: gluster vol set all cluster.enable-shared-storage enable <== This command created a new volume ?
13:12 kotreshhr thoht: gluster vol info not showing it?
13:12 kotreshhr thoht_: Yes
13:13 thoht_ kotreshhr: gluster vol info shows me only 1 volume
13:13 thoht_ http://fpaste.org/279513/91480014/
13:13 glusterbot Title: #279513 Fedora Project Pastebin (at fpaste.org)
13:13 kotreshhr thoht_: I have to check whether they mask the shared volume in volume info.
13:14 kotreshhr thoht_: Could you paste the geo-rep logs : /var/log/glusterfs/geo-replication/<master-vol-name>/*.slave.log
13:18 thoht_ kotreshhr: http://fpaste.org/279515/91506514/
13:18 glusterbot Title: #279515 Fedora Project Pastebin (at fpaste.org)
13:19 kotreshhr thoht_: For now stop geo-rep session!
13:19 kotreshhr thoht_: there is a known issue on the debian platform.
13:20 kotreshhr thoht_: Could you symlink /usr/libexec/gsyncd to the path where gsyncd is present..
13:20 thoht_ ok it is stoped
13:20 bennyturns joined #gluster
13:21 kotreshhr thoht_: Before that please check /var/lib/glusterd/geo-replication/
13:21 thoht_ /usr/libexec doesn't exist, i created the folder
13:21 thoht_ yes ?
13:21 kotreshhr thoht_: cat common_secret.pem.pub
13:21 shyam joined #gluster
13:21 thoht_ kotreshhr: ok, there are plenty of keys
13:22 thoht_ command="/usr/libexec/glusterfs/gsyncd"
13:22 thoht_ ok the path is wrong
13:23 kotreshhr thoht_: Yes, create a symlink to actual location from /usr/libexec/glusterfs/gsyncd
13:23 thoht_ ok done
13:23 thoht_ on all nodes then
13:23 thoht_ now, shall i restart the volume ?
13:24 kotreshhr thoht_: Yes, do it on all nodes, especially on node3 (slave node)
13:24 thoht_ ok it is started
13:24 thoht_ still Faulty
13:24 thoht_ but i can see in process list /usr/bin/python /usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py --path=/shared/vm/data  --monitor -c /var/lib/glusterd/geo-replication/pve-gluster-share_node3_shared-geo/gsyncd.conf --iprefix=/var :pve-gluster-share --glusterd-uuid=c7fcfc42-f8f7-45c9-871d-3effa07a5db4 node3::shared-geo
13:24 kotreshhr thoht_: now once more geo-rep logs. I need logs at the end.
13:25 thoht_ ok
13:26 thoht_ kotreshhr: http://fpaste.org/279521/44915549/
13:26 glusterbot Title: #279521 Fedora Project Pastebin (at fpaste.org)
13:26 thoht_ Popen: ssh> bash: /nonexistent/gsyncd: No such file or directory
13:26 thoht_ this is strange
13:27 kotreshhr thoht_: sorry, I think you should symlink /usr/libexec/glusterfs to /usr/lib/x86_64-linux-gnu/glusterfs/
13:27 kotreshhr thoht_: Did you do on node3 (slave) as well?
13:27 thoht_ /usr/libexec/glusterfs/gsyncd -> /usr/lib/x86_64-linux-gnu/glusterfs/gsyncd
13:27 thoht_ /usr/lib/x86_64-linux-gnu/glusterfs/gsyncd is the real file
13:27 thoht_ it is done on the 3 nodes yes
13:28 kotreshhr Yes
13:28 kotreshhr thoht_: The glusterfs directory does contain other python scripts which geo-rep uses.
13:28 thoht_ oh
13:29 thoht_ maybe  i should symlink the folder then
13:29 kotreshhr thoht_: symlink that directory and check
13:29 thoht_ /usr/lib/x86_64-linux-gnu/glusterfs/ to /usr/libexec/glusterfs
13:29 thoht_ ok .?
13:29 kotreshhr thoht_: /usr/libexec/glusterfs -> /usr/lib/x86_64-linux-gnu/glusterfs
13:30 kotreshhr thoht_: yes
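The Debian workaround being applied here, as commands (run on all three nodes, paths from this log); linking the whole directory rather than just gsyncd also covers the helper scripts geo-rep calls:

    mkdir -p /usr/libexec
    ln -s /usr/lib/x86_64-linux-gnu/glusterfs /usr/libexec/glusterfs

The keys in common_secret.pem.pub hard-code command="/usr/libexec/glusterfs/gsyncd", which is why the path also has to resolve on the slave.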
13:30 thoht_ ok done
13:30 kotreshhr thoht_: stop and start geo-rep
13:31 thoht_ still faulty after stop and start
13:31 kotreshhr thoht_: logs still say non-existing gsyncd ?
13:32 thoht_ 2015-10-15 15:31:46.883336] E [resource(/shared/vm/data):226:logerr] Popen: ssh> bash: /nonexistent/gsyncd: No such file or directory
13:32 thoht_ yes
13:32 neofob joined #gluster
13:32 thoht_ Popen: command "ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-bYq77N/027c1a64cb678f507c82aa693f8bd20e.sock root@node3 /nonexistent/gsyncd --session-owner c60d310e-91a6-4f9d-b6d0-ade157b176a5 -N --listen --timeout 120 gluster://localhost:shared-geo" returned with 127
13:33 thoht_ why is it trying /nonexistent/gsyncd ?
13:33 kotreshhr thoht_: that basically means it is not finding /usr/libexec/glusterfs/gsyncd on the slave.
13:33 thoht_ let me check the home ls -la /usr/libexec/glusterfs/gsyncd
13:33 thoht_ -rwxr-xr-x 1 root root 19064 Oct  1 14:59 /usr/libexec/glusterfs/gsyncd
13:34 thoht_ it is good now with the symlink on folder level
13:34 thoht_ but still nonexistent
13:35 kotreshhr thoht_: ok run this command "ssh -i /var/lib/glusterd/geo-replication/secret.pem root@node3"
13:35 kotreshhr thoht_: tell me can it run gsyncd?
13:36 thoht_ i logged me on node3
13:36 thoht_ it logged me on node3
13:37 kotreshhr thoht_: it should run gsyncd as command=/usr/libexec/glusterfs/gsyncd
13:37 thoht_ ls -la humm
13:38 thoht_ http://fpaste.org/279534/14449162/
13:38 glusterbot Title: #279534 Fedora Project Pastebin (at fpaste.org)
13:38 thoht_ it just did a ssh session
13:39 skylar joined #gluster
13:39 thoht_ this is the commen_secret.pem.pub : http://fpaste.org/279537/14449163/
13:39 glusterbot Title: #279537 Fedora Project Pastebin (at fpaste.org)
13:39 Bhaskarakiran joined #gluster
13:39 hamiller joined #gluster
13:40 kotreshhr thoht_: confirm once on node3, in ~/.ssh/authorized_keys, what does the public key say?
13:41 thoht_ kotreshhr: on node3, ~/.ssh/authorized_keys has no command=
13:41 Philambdo joined #gluster
13:41 kotreshhr thoht_: Oh then the keys are not distributed properly..
13:41 thoht_ should i copy paste it from node1 ?
13:42 mpietersen joined #gluster
13:43 kotreshhr thoht_: either you copy the contents of "common_secret.pem.pub" to ~/.ssh/authorized_keys or do 'geo-rep ... create push-pem force'
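To check and repair the key distribution kotreshhr describes, roughly (paths and names from this log):

    # on a master node: one forced-command key per master node should be in this file
    cat /var/lib/glusterd/geo-replication/common_secret.pem.pub
    # on node3: the same entries must land in root's authorized_keys
    ssh root@node3 'grep command= ~/.ssh/authorized_keys'
    # re-running create with push-pem force redistributes them
    gluster volume geo-replication pve-gluster-share node3::shared-geo create push-pem force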
13:43 mpietersen joined #gluster
13:45 thoht_ yes
13:45 mpietersen joined #gluster
13:45 kotreshhr thoht_: now start and check
13:46 thoht_ ok now it is saying Faulty for node1 and Initializing for node2
13:46 thoht_ normal ?
13:46 thoht_ and now both are faulty
13:47 thoht_ http://fpaste.org/279538/44491681/
13:47 glusterbot Title: #279538 Fedora Project Pastebin (at fpaste.org)
13:48 thoht_ kotreshhr: http://fpaste.org/279539/14449168/ this is the authorized_keys of node3
13:48 glusterbot Title: #279539 Fedora Project Pastebin (at fpaste.org)
13:48 thoht_ those are my old keys, for jumping from node1 to node3, without command=
13:48 thoht_ should i remove it ?
13:49 kotreshhr thoht_: yes, remove them
13:51 thoht_ ok now i can see Initializing for both node
13:52 thoht_ and now faulty both
13:52 kotreshhr thoht_: good, now check the logs, are you seeing a gluster_shared_storage not mounted error?
13:53 thoht_ http://fpaste.org/279541/91717114/
13:53 glusterbot Title: #279541 Fedora Project Pastebin (at fpaste.org)
13:53 thoht_ this is the log
13:54 kotreshhr thoht_: "[2015-10-15 15:51:58.627310] I [monitor(monitor):288:monitor] Monitor: worker(/shared/vm/data) not confirmed in 60 sec, aborting it"
13:54 dmnchild joined #gluster
13:56 thoht_ kotreshhr: on which node ?
13:56 thoht_ what should i mount ?
13:56 kotreshhr thoht_: I need to check on this, this should not happen.
13:57 aravindavk joined #gluster
14:00 kotreshhr thoht_: I will get back to you tomorrow. Please drop a mail to gluster-users@gluster.org with the logs.
14:01 thoht_ should i subscribe to the mailing list at first ?
14:01 thoht_ or i can send an email directly ?
14:01 haomaiwa_ joined #gluster
14:02 dgandhi joined #gluster
14:03 rideh joined #gluster
14:04 kotreshhr thoht_: You can send it directly but to receive all the gluster-users mails, you need to subscribe: https://www.gluster.org/mailman/listinfo/gluster-users
14:04 glusterbot Title: Gluster-users Info Page (at www.gluster.org)
14:05 kotreshhr left #gluster
14:11 amye joined #gluster
14:15 sakshi joined #gluster
14:25 ghenry joined #gluster
14:26 volga629 joined #gluster
14:27 volga629 Hello everyone, if I need to reboot a server which is one of the nodes in a gluster cluster, what is the correct way to mark it offline for the reboot?
14:35 beeradb_ joined #gluster
14:36 jiffin volga629: there is no explicit method for that, but you can stop the gluster services on that node if you want to take it offline
14:36 volga629 ok thanks
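A rough sequence for jiffin's suggestion; stopping the management daemon alone does not stop the brick processes, so they are killed separately (service names vary by distro, this assumes a systemd-style install):

    systemctl stop glusterd     # glusterfs-server on Debian-style packaging
    pkill glusterfsd            # brick processes
    pkill glusterfs             # self-heal / nfs / other gluster daemons still on the node
    reboot
    # after boot glusterd brings the bricks back; then check self-heal progress:
    gluster volume heal VOLNAME info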
14:38 maveric_amitc_ joined #gluster
14:42 skoduri joined #gluster
14:48 skylar1 joined #gluster
14:50 a_ta joined #gluster
14:51 volga629 joined #gluster
14:54 deniszh joined #gluster
14:55 David_Varghese joined #gluster
14:59 shubhendu_ joined #gluster
15:01 haomaiwang joined #gluster
15:09 CyrilPeponnet joined #gluster
15:14 CyrilPeponnet joined #gluster
15:15 CyrilPeponnet joined #gluster
15:16 CyrilPeponnet joined #gluster
15:18 CyrilPeponnet joined #gluster
15:28 nbalacha joined #gluster
15:30 togdon joined #gluster
15:38 a_ta Hello, I've been trying to get snapshots to work but have run into the following error: E [MSGID: 106078] [glusterd-snapshot.c:1919:glusterd_is_thinp_brick] 0-management: Failed to get thin pool name for device /dev/mapper/vgpool2-lvm2 [Permission denied]
15:38 a_ta Does anyone have any insight on this problem?
15:39 stickyboy joined #gluster
15:40 kotreshhr joined #gluster
15:44 kdhananjay joined #gluster
15:46 squizzi_ joined #gluster
15:52 ira joined #gluster
15:52 JoeJulian a_ta: The only way that I can think of getting an EPERM trying to run lvs would be selinux.
15:55 a_ta Currently I have selinux set to permissive. Should it be disabled instead?
15:56 JoeJulian Well, then it's not that.
15:56 JoeJulian https://github.com/gluster/glusterfs/blob/release-3.7/xlators/mgmt/glusterd/src/glusterd-snapshot.c#L1911-L1920
15:56 glusterbot Title: glusterfs/glusterd-snapshot.c at release-3.7 · gluster/glusterfs · GitHub (at github.com)
15:56 Rapture joined #gluster
15:57 JoeJulian That's what it's doing and it's running as root, so I'm not sure how you can not have permissions.
15:58 skylar joined #gluster
15:59 JoeJulian Ah, red herring.
15:59 JoeJulian EPERM is 1. I bet lvm exited with a return value of 1.
16:01 haomaiwa_ joined #gluster
16:02 a_ta What would cause that to happen? I've run the command laid out on line 1911 manually without issue
16:02 JoeJulian check the return code?
16:09 skylar1 joined #gluster
16:14 jwd joined #gluster
16:17 a_ta JoeJulian: hmmm, when I run the command manually I get a return code of 0
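What JoeJulian is asking for, concretely: glusterd_is_thinp_brick shells out to lvs to read the brick device's pool name, so reproduce that call and look at its exit status (the exact flags below are an assumption based on the linked source, not a quote from it):

    lvs --noheadings -o pool_lv /dev/mapper/vgpool2-lvm2
    echo $?    # glusterd maps a failure here to the "Failed to get thin pool name" error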
16:18 skylar joined #gluster
16:19 calavera joined #gluster
16:26 kotreshhr joined #gluster
16:27 kotreshhr left #gluster
16:36 maveric_amitc_ joined #gluster
16:38 Rapture joined #gluster
16:42 Manikandan joined #gluster
16:45 a_ta joined #gluster
16:47 RayTrace_ joined #gluster
17:01 haomaiwa_ joined #gluster
17:01 dmnchild joined #gluster
17:09 shyam joined #gluster
17:18 hagarth joined #gluster
17:21 ivan_rossi left #gluster
17:24 jamespppp joined #gluster
17:24 jamespppp Hey guys
17:24 jamespppp can i pick someone's brains please. I'm trying to find a good storage solution
17:25 jamespppp my requirements are: fault tolerance over 2 sites, and being able to write to a local node and have it sync
17:25 jamespppp to provide a local client cache for files that are read all the time
17:25 jobewan joined #gluster
17:25 jamespppp mostly small .wav files
17:26 jamespppp also be simple to administer/fix when things go wrong (which inevitably they do!)
17:26 JoeJulian If it's read-only, I would look at a gluster volume (for administrative writes) and georeplication to your read-only sites.
17:28 jamespppp no
17:28 jamespppp mainly reads
17:28 jamespppp but writes
17:28 jamespppp customers need to write their recording in the first place, then the clients play them back
17:35 RayTrace_ joined #gluster
17:36 JoeJulian If you want something synchronous, available, and performant, you're up against the CAP theorem. If I were doing that I think I would probably store them as objects in swift.
17:37 squizzi joined #gluster
17:39 tdasilva JoeJulian: +1
17:41 mhulsman joined #gluster
17:47 jamespppp what i read on the rackspace blog suggests distributed-replicated architecture would fit the bill. am i missing something?
17:52 jamespppp actually distributed-geo-replication seems even better
17:53 JoeJulian It's unidirectional.
17:54 jamespppp it is?
17:54 jamespppp so is rsync!
17:54 jamespppp well then that doesn't fit the bill well at all
17:54 JoeJulian it's rsync with a changelog.
17:55 jamespppp fair enough
17:55 jamespppp maybe a straight csync and nfs setup is easier
17:58 JoeJulian You're trying to create a CDN. That's what swift is.
18:00 mhulsman joined #gluster
18:00 jamespppp ok, im struggling to find info on google...
18:01 jamespppp all i come up with is swiftserve
18:01 jamespppp commercial outfit
18:01 haomaiwa_ joined #gluster
18:02 JoeJulian @lucky swift object store
18:02 glusterbot JoeJulian: https://wiki.openstack.org/wiki/Swift
18:03 tdasilva jamespppp, JoeJulian: http://docs.openstack.org/developer/swift/
18:03 glusterbot Title: Welcome to Swift’s documentation! swift 2.5.1.dev50 documentation (at docs.openstack.org)
18:05 jamespppp thank you!
18:05 jamespppp is there any option to make it appear as a standard FS mount?
18:06 jamespppp or is it all api driven? it's like amazon's S3 service...
18:07 julim joined #gluster
18:08 tdasilva Swift is all REST api
18:09 JoeJulian Are you trying to make pre-packaged software do this, or is it your own system?
18:09 jamespppp our own system
18:09 JoeJulian Ok, then yeah, use the api. You'll be much happier in the long run.
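For a feel of what "use the api" means here, fetching an object over Swift's REST interface is one authenticated GET; the endpoint, account and object names below are hypothetical:

    # TOKEN comes from the auth service (e.g. keystone)
    curl -H "X-Auth-Token: $TOKEN" \
         -o prompt-1234.wav \
         https://swift.example.com/v1/AUTH_demo/recordings/prompt-1234.wav
    # an upload is the same URL shape with -T localfile (a PUT)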
18:10 jamespppp yes i see. So in reality i'd need to retrieve the file object with the API to reference it locally...
18:10 jamespppp we're using Asterisk (open source PBX) on a large scale
18:10 jamespppp i need to 'distribute' the media
18:10 jamespppp to get Asterisk to reference the media, it needs to be local
18:11 jamespppp so we add a delay if we use something like Swift as effectively it's a D/L then a play
18:11 * JoeJulian would use freeswitch
18:13 JoeJulian A freeswitch plugin to playback directly from swift would be really easy. Not at all sure about asterisk. I used it for about a day before I realized its deficiencies.
18:14 jamespppp :quit
18:15 JoeJulian was it something I said?
18:17 neofob joined #gluster
18:22 a_ta joined #gluster
18:22 mhulsman joined #gluster
18:30 ctria joined #gluster
18:40 a2 joined #gluster
18:47 mhulsman joined #gluster
19:00 chirino joined #gluster
19:01 dlambrig_ joined #gluster
19:02 jwaibel joined #gluster
19:19 dmnchild joined #gluster
19:20 unicky joined #gluster
19:20 togdon joined #gluster
19:24 squizzi_ joined #gluster
19:25 squizzi joined #gluster
19:30 calavera joined #gluster
19:33 Philambdo joined #gluster
19:37 maveric_amitc_ joined #gluster
19:38 nzero joined #gluster
19:41 RayTrace_ joined #gluster
19:48 adamaN joined #gluster
20:03 papamoose1 joined #gluster
20:08 togdon joined #gluster
20:10 DV joined #gluster
20:16 ira joined #gluster
20:19 jwd joined #gluster
20:22 hagarth joined #gluster
20:35 neofob left #gluster
20:54 steveeJ joined #gluster
20:55 steveeJ hey, I'm wondering if glusterfs can be used to synchronize access to a shared storage device
20:55 plarsen joined #gluster
20:56 steveeJ all machines are attached to a fibrechannel storage, which they can access at the same time. I'm trying to migrate away from clvm and looking for alternatives
20:57 JoeJulian glusterfs *is* a shared storage device.
20:58 steveeJ I do understand that part, I was just wondering if it could fit my use case too
20:58 JoeJulian Sure
20:58 steveeJ NICs must not be used for traffic
20:58 JoeJulian Just make a volume out of it, mount your volume, and use that.
20:58 steveeJ just synchronization
21:01 JoeJulian Sure, use rdma.
21:05 steveeJ I'm still not sure how to set up the brick. AFAIU the brick is really a filesystem (like xfs). I can't mount that brick on every host, but that's exactly what I want to do
21:05 steveeJ better said, that's what I think I should be able to do with glusterfs, just instead of xfs
21:07 ayma joined #gluster
21:08 JoeJulian You are correct. You have to have a filesystem on your device. That filesystem is a brick. Gluster would share that brick as [part of] a volume. You would mount that volume on your clients. Gluster does have rdma support.
21:11 steveeJ I'm not sure if we're talking about the same use-case
21:11 ayma joined #gluster
21:14 JoeJulian You're not going to use the filesystem on your appliance from your clients if you use gluster. You have to use gluster for that.
21:21 steveeJ I get that. I'm trying to find out how the bricks on the different servers could use the *same* block device which they have connected via fibrechannel
21:36 JoeJulian The only way you could do that would be to have your appliance present different luns. Otherwise you're just going to have one server with one brick.
21:37 calavera joined #gluster
21:39 steveeJ alright, that's what I thought. but different luns is an option I'm trying to avoid. thanks for the input
21:40 stickyboy joined #gluster
21:49 lpabon joined #gluster
21:57 haomaiwang joined #gluster
21:58 nzero joined #gluster
22:08 DV__ joined #gluster
22:11 calavera joined #gluster
22:35 dgandhi joined #gluster
22:39 shyam joined #gluster
22:47 plarsen joined #gluster
23:03 nzero joined #gluster
23:04 gildub joined #gluster
23:15 a2 joined #gluster
23:30 suliba_ joined #gluster
23:30 clutchk joined #gluster
23:30 Rapture_ joined #gluster
23:30 msvbhat_ joined #gluster
23:30 cvstealth joined #gluster
23:30 csim joined #gluster
23:31 owlbot` joined #gluster
23:34 suliba joined #gluster
23:34 amye joined #gluster
23:35 delhage joined #gluster
23:36 Rydekull joined #gluster
23:39 Telsin joined #gluster
23:39 dastar joined #gluster
23:40 deni joined #gluster
