
How to "Delete a tenant and its data from the Interset Cluster"

 

Question

How do I delete a tenant and its respective data from the Interset Cluster?

Summary

This "How to" article provides the steps on how to delete a tenant and its data from an existing Interset Cluster. The steps high level steps are listed below:

  • Step 1: Stop Flume Service in Ambari
  • Step 2: Delete tenant configuration files
  • Step 3: Delete tenant data from HBase
  • Step 4: Delete tenant data from Elasticsearch
  • Step 5: Delete tenant and admin(s) from Interset Cluster
  • Step 6: Delete Kafka Topic(s) and Consumer Groups
  • Step 7: Delete Ingest directory (Optional)
  • Step 8: Execute HBase compaction
  • Step 9: Remove deleted tenant config from Flume

The following nodes will need to be accessed (via web UI or SSH, as indicated) in order to delete a tenant:

  • AMBARI (web)
  • ANALYTICS (ssh)
  • STREAM (ssh)

WARNING: USE THE STEPS IN THIS ARTICLE UNDER THE GUIDANCE OF INTERSET SUPPORT. PERFORMING THE STEPS BELOW WILL REMOVE THE DESIRED TENANT AND ALL OF ITS DATA FROM THE INTERSET CLUSTER.

Steps

Step 1: Stop Flume service in Ambari

  1. Open up a web browser and navigate to the Ambari URL and log in as the Ambari Admin User.
    • NOTE: The default Ambari Admin username/password is:
      • username: admin
      • password: admin
  2. In the Ambari UI, on the left side, click on Flume
  3. On the Flume page, click the Service Actions dropdown and select Stop
  4. Once Flume has stopped, click OK in the pop-up

Step 2: Delete tenant configuration files

  1. SSH to the ANALYTICS NODE as the Interset User
    • EXAMPLE:
      • ssh interset@<ANALYTICS_NODE_FQDN>
  2. Type in the following command to delete the analytics config file for the desired tenant from the /opt/interset/analytics/conf directory:
    • rm -f /opt/interset/analytics/conf/<tenant_interset_conf>
  3. Type in the following command to kill the workflow for the desired tenant:
    • /opt/interset/rules/bin/workflow.sh --kill /opt/interset/rules/conf/<tenant_rules_conf>
  4. Type in the following command to delete the rules config file for the desired tenant from the /opt/interset/rules/conf directory (a combined example follows this list):
    • rm -f /opt/interset/rules/conf/<tenant_rules_conf>
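The following is a combined sketch of this step. The config file names interset_t5.conf and rules_t5.conf are assumptions for illustration only; use the actual file names for the tenant being deleted:

    # Remove the analytics config for the tenant (file name is an assumption)
    rm -f /opt/interset/analytics/conf/interset_t5.conf

    # Kill the tenant's workflow, then remove its rules config (file name is an assumption)
    /opt/interset/rules/bin/workflow.sh --kill /opt/interset/rules/conf/rules_t5.conf
    rm -f /opt/interset/rules/conf/rules_t5.conf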

Step 3: Delete tenant data from HBase

  1. SSH to the ANALYTICS NODE as the Interset User
    • EXAMPLE:
      • ssh interset@<ANALYTICS_NODE_FQDN>
  2. Type in the following command to change into the /opt/interset/analytics/bin directory:
    • cd /opt/interset/analytics/bin
  3. Type in the following command to purge data from HBase for the desired tenant. Please modify the variables accordingly:
    • ./sql.sh --action clean --tenantID <tid> --dbServer <HBase_FQDN> --force true
  4. Once the command above completes, type in the following command to launch phoenix-sqlline:
    • phoenix-sqlline
  5. In the phoenix-sqlline prompt, type in the following command to determine the number of rows stored in the OBSERVED_ENTITY_RELATION_MINUTELY_COUNTS table for the tenant. Please modify the variable(s) accordingly:
    • SELECT COUNT(*) FROM OBSERVED_ENTITY_RELATION_MINUTELY_COUNTS WHERE TID = '<tid>';
  6. In the phoenix-sqlline prompt, type in the following command to delete tenant data from the OBSERVED_ENTITY_RELATION_MINUTELY_COUNTS table. Please modify the variable(s) accordingly:
    • DELETE FROM OBSERVED_ENTITY_RELATION_MINUTELY_COUNTS WHERE TID = '<tid>';
      • NOTE: Depending on the amount of data that resides in OBSERVED_ENTITY_RELATION_MINUTELY_COUNTS, the command above may time out. If it times out, please repeat steps 4-6 until all rows are removed from the table.
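As a combined sketch of this step, assuming a tenant ID of 5 and an HBase server named hbase.example.com (both hypothetical):

    cd /opt/interset/analytics/bin

    # Purge the tenant's data from HBase
    ./sql.sh --action clean --tenantID 5 --dbServer hbase.example.com --force true

    # Then, at the phoenix-sqlline prompt:
    phoenix-sqlline
    SELECT COUNT(*) FROM OBSERVED_ENTITY_RELATION_MINUTELY_COUNTS WHERE TID = '5';
    DELETE FROM OBSERVED_ENTITY_RELATION_MINUTELY_COUNTS WHERE TID = '5';
    !quit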

Step 4: Delete tenant data from Elasticsearch

  1. SSH to the ANALYTICS NODE as the Interset User.
    • EXAMPLE:
      • ssh interset@<ANALYTICS_NODE_FQDN>
  2. Type in the following command to delete the Elasticsearch indices for the desired tenant. Please modify the variables accordingly:
    • for i in $(curl -ks -X GET http<s>://<SEARCH_NODE_FQDN>:9200/_cat/indices | grep -o ".*_<tid>[-_].*" | awk '{print $3}'); do curl -ks -X DELETE http<s>://<SEARCH_NODE_FQDN>:9200/$i; done  
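After the loop completes, you can optionally confirm that no indices remain for the tenant by re-listing the indices with the same filter used above; an empty result means the tenant's indices were removed. (This verification command is a suggestion, not part of the product tooling.)

    curl -ks -X GET http<s>://<SEARCH_NODE_FQDN>:9200/_cat/indices | grep -o ".*_<tid>[-_].*"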

Step 5: Delete tenant and admin(s) from Interset

  1. SSH to the ANALYTICS NODE as the Interset User
    • EXAMPLE:
      • ssh interset@<ANALYTICS_NODE_FQDN>
  2. Type in the following command to generate an access_token from the Interset API for the admin user:
    • curl -ks -X POST -d '{"username":"<admin_username>", "password":"<admin_password>"}' -H "Content-Type: application/json" http<s>://<Reporting_Node_FQDN>/api/actions/login
      • NOTE: If admin login has not been modified, the default admin login is:
        • username: root
        • password: root
  3. Copy the access_token output value as it will be required to delete admin(s) and tenant from the system.
  4. Type in the following command to delete user(s)/admin(s) from desired tenant. Please modify the variable(s) accordingly:
    • curl -ks -X DELETE -H "Content-Type: application/json" -H "Authorization: Bearer <access_token>" http<s>://<Reporting_Node_FQDN>/api/tenants/<tid>/users/<username>
  5. Repeat the step above to delete any additional user(s)/admin(s) from the desired tenant.
  6. Type in the following command to delete the desired tenant from the Interset Cluster. Please modify the variable(s) accordingly:
    • curl -ks -X DELETE -H "Content-Type: application/json" -H "Authorization: Bearer <access_token>" http<s>://<Reporting_Node_FQDN>/api/tenants/<tid>
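As a combined sketch of this step, assuming the default root/root admin login, a tenant ID of 5, and a user named jsmith (all hypothetical); the access_token is the value returned by the login call in the first command:

    # 1. Log in and copy the access_token value from the JSON response
    curl -ks -X POST -d '{"username":"root", "password":"root"}' -H "Content-Type: application/json" http<s>://<Reporting_Node_FQDN>/api/actions/login

    # 2. Delete a user/admin from the tenant (repeat for each user/admin)
    curl -ks -X DELETE -H "Content-Type: application/json" -H "Authorization: Bearer <access_token>" http<s>://<Reporting_Node_FQDN>/api/tenants/5/users/jsmith

    # 3. Delete the tenant itself
    curl -ks -X DELETE -H "Content-Type: application/json" -H "Authorization: Bearer <access_token>" http<s>://<Reporting_Node_FQDN>/api/tenants/5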

Step 6: Delete Kafka Topic(s) and Consumer Groups

  1. SSH to the STREAM NODE as the Interset User.
    • EXAMPLE:
      • ssh interset@<STREAM_NODE_FQDN>
  2. Type in the following command to change into the Kafka broker bin directory:
    • cd /usr/hdp/current/kafka-broker/bin
  3. Type in the following command to list the current Kafka Topic(s). Please modify the variable(s) accordingly:
    • ./kafka-topics.sh --zookeeper <MASTER_NODE_FQDN>:2181 --list
  4. Locate the Kafka Topic(s) for the desired tenant that is to be deleted:
    • EXAMPLE:
      • "interset_<data_source>_events_<DID>_<TID>"
      • AND/OR "interset_<data_source>_raw_csv_<DID>_<TID>"
  5. Once the topic(s) have been identified, type in the following command to delete the Kafka Topic(s). Please modify the variable(s) accordingly:
    • ./kafka-topics.sh --zookeeper <MASTER_NODE_FQDN>:2181 --delete --topic <kafka_topic_name>
  6. Repeat the command above to delete all identified Kafka Topic(s) for the desired tenant.
  7. Type in the following command to list the current Kafka Consumer Groups. Please modify the variable(s) accordingly:
    • ./kafka-consumer-groups.sh --zookeeper <MASTER_NODE_FQDN>:2181 --list
  8. Locate the Kafka Consumer Group(s) for the desired tenant that is to be deleted:
    • EXAMPLE:
      • "interset_<data_source>_events_<DID>_<TID>_hbase_group"
      • AND/OR "interset_<data_source>_events_<DID>_<TID>_es_group"
      • AND/OR "interset_violations_<TID>_es_group"
      • AND/OR "interset_<data_source>_raw_csv_<DID>_<TID>_transform_group"
      • AND/OR "interset_violations_<TID>_phoenix_group"
  9. Once the Consumer Group(s) have been identified, type in the following command to delete the desired Consumer Group(s). Please modify the variable(s) accordingly:
    • ./kafka-consumer-groups.sh --zookeeper <MASTER_NODE_FQDN>:2181 --delete --group <kafka_consumer_group_name>
  10. Repeat the command above to delete all identified Kafka Consumer Group(s) for the desired tenant.
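To narrow the listings in steps 3 and 7 down to the tenant being deleted, the output can be filtered on the tenant ID. This is a convenience suggestion that assumes the topic and group names contain the tenant ID as shown in the examples above; double-check the matches before deleting anything:

    ./kafka-topics.sh --zookeeper <MASTER_NODE_FQDN>:2181 --list | grep "_<TID>"
    ./kafka-consumer-groups.sh --zookeeper <MASTER_NODE_FQDN>:2181 --list | grep "_<TID>"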

Step 7: Delete data ingest directory (Optional)

  1. SSH to the STREAM NODE as the Interset User.
    • EXAMPLE:
      • ssh interset@<STREAM_NODE_FQDN>
  2. Delete any ingest directory, and its data, used by the tenant that was deleted (see the example below).
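For illustration only, assuming the tenant's ingest data was staged under a directory such as /opt/interset/ingest/<tid> (the actual path depends on how ingest was set up in your environment):

    rm -rf /opt/interset/ingest/<tid>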

Step 8: Run HBase compaction

  1. SSH to the ANALYTICS NODE as the Interset User.
    • EXAMPLE:
      • ssh interset@<ANALYTICS_NODE_FQDN>
  2. Type in the following command to change into the /opt/interset/analytics/bin directory:
    • cd /opt/interset/analytics/bin
  3. Type in the following command to run the compaction script on HBase:
    • ./compaction.sh

Step 9: Remove Flume config for deleted tenant

  1. Open up a web browser and navigate to the Ambari URL and log in as the Ambari Admin User.
    • NOTE: The default Ambari Admin username/password is:
      • username: admin
      • password: admin
  2. In the Ambari UI, on the left side, click on Flume
  3. On the Flume page, click the Configs tab
  4. Next, click the Group dropdown menu, and select Ingest
    • NOTE: The group name may differ. In this case, please select the group that the stream node(s) are associated with.
  5. In the flume.conf section, update the configuration by removing the sections for the deleted tenant (an illustrative example follows this list).
  6. Once completed, click the Save button. Add a comment if desired, then click Save, and then click OK
  7. Click the Restart button and select Restart All Affected.
  8. Click Confirm Restart All
  9. Wait until the restart of Flume completes successfully, then click OK
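For reference, the tenant-specific sections removed in step 5 follow standard Flume property syntax (agent.sources.<name>.*, agent.channels.<name>.*, agent.sinks.<name>.*). The agent and component names below are purely illustrative and will differ in your flume.conf; remove every property block whose component name references the deleted tenant's DID/TID, and also remove those component names from the agent's sources/channels/sinks lists:

    # Illustrative only - actual names depend on your flume.conf
    ingest.sources.<data_source>_<DID>_<TID>.type = ...
    ingest.channels.<data_source>_<DID>_<TID>_channel.type = ...
    ingest.sinks.<data_source>_<DID>_<TID>_sink.channel = <data_source>_<DID>_<TID>_channel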

Applies To

  • Interset 5.4.x or higher