Blog

  • test-frame

    TestFrame v0.1.0

    The objective is to provide a minimal, simple framework for validating an HTTP/S API, such as a REST-based or GraphQL-based API.

    It should stay explicit, easy to use, and easy to customize.

    Deps

    You may need to have Python available because of the “commander” package; this dependency will disappear.

    {
      "axios": "^0.19.2",
      "colors": "^1.4.0",
      "commander": "^5.1.0",
      "sleep": "^6.2.0"
    }

    Usage

    Install this package and require it in any JS file:

    const ApiTest = require('../../index');
    
    // The package comes with a Check module that has predefined functions to check common types.
    const Check = ApiTest.check;
    
    // Send a GET request to /hello and check that the returned value is a string strictly equal to 'hello'.
    ApiTest.get(
      '/hello'
      , (v) =>
          Check.typ.string(v)
          && v === 'hello'
      , 'Get hello message from server'
      , 'STANDARD API'
    );
    
    // You can define custom common functions by overriding Check ones.
    Check.success = v => !!v['result'] && v.result === 'success';
    
    // Send a POST request with a { name: 'foo_0' } payload and check the response according to Check.success.
    ApiTest.post(
        `/post/user`
        , { name: 'foo_0' }
        , (v) => Check.success(v)
        , `Post foo_0 user`
        , 'USER API MANAGEMENT'
    );
    
    
    // Send a POST request with a bad payload and check that it leads to a managed error.
    Check.error = v => !!v['result'] && v.result === 'failed';
    ApiTest.post(
        `/post/user`
        , { bad_value: 'bad_content' }
        , (v) => Check.error(v)
        , `Post a bad payload to add user`
        , 'USER API MANAGEMENT'
    );
    
    // The 'testFor' helper calls the s => { ... } function for each element of the [ 'foo_1', ... ] list.
    ApiTest.testFor(['foo_1', 'foo_2', 'foo_3'], s => {
      ApiTest.post(
        `/post/user`
        , { name: s }
        , (v) => Check.success(v)
        , `Post ${s} user`
        , 'USER API MANAGEMENT'
      );
    });
    
    // Once every test is declared, call ApiTest.run() to run everything.
    ApiTest.run();

    Then run the tests using the Node interpreter:

    $ node tests/hello/hello.js --help
    
    Usage: hello [options] <url>
    
    Options:
      -ss, --selfSigned <bool>              Accept Self Signed Certificate
      -ucao, --useAuthorityOnly <bool>      Use only authority file
      -ca, --authority <path>               Path to certificate authority file
      -cert, --certificate <path>           Path to certificate file
      -pk, --privateKey <path>              Path to the associated private key file
      -up12, --usePkcs12 <bool>             Use P12 file
      -p12, --pkcs12 <path>                 Path to the associated P12 file
      -p12pw, --pkcs12Password <string>     Password associated with the P12 file
      -https, --useHttps <bool>             Should use HTTPS instead of HTTP
      -vr, --verboseResponse <bool>         Should print server responses
      -vr, --verboseError <bool>            Should print server errors responses
      -a, --await <[1; 360000] as integer>  Set an await time between requests
      -m, --matching <string>               Execute tests matching given string
      -pn, --productName <string>           Execute parametered product named tests
      -h, --help                            display help for command
    
    $ node tests/hello/hello.js http://127.0.0.1:4200/
    

    Visit original content creator repository
    https://github.com/aeghost/test-frame

  • file-digests

    File-digests

    A utility to check whether there are any changes in your files.

    If you move your files around, perhaps across platforms, maybe through unreliable connections, or with tools you don’t quite trust, this utility will help you verify the result and spot possible errors.

    It will also help you to find issues with your storage: silent data corruption, bitflips/bitrot, bad blocks, other hardware or software faults.

    You could use this utility to verify the restore process for your backups.

    It can show a list of duplicate files as well.

    It is a CLI utility written in Ruby. It’s cross-platform (Linux/UNIX/Windows/macOS) and is tested on Ubuntu 20.04, Windows 10, and macOS Catalina.

    How it works

    • Given a directory, it calculates digests/hashes (BLAKE2b512, SHA3-256, or SHA512-256) for the files it contains.
    • Those digests are then kept in a SQLite database. By default, the database is stored in the same directory as the rest of the files. You can specify any other location.
    • Any time later, you run the tool again:
      • It will check whether any file became corrupted or went missing due to hardware/software issues.
      • It will store digests of updated files. It is assumed that if a particular file has both its mtime and digest changed, then it’s a sign of a legitimate update, not of a storage fault.
      • New files will be added to the database.
      • Renames will be tracked.
      • If files are missing from your storage, the tool will ask for your confirmation before removing the information about those files from the database.

    Digest algorithms

    • You can change the digest algorithm at any time. The transition to a new algorithm will only occur if all files pass the check against the digests stored using the old one.
    • Faster algorithms like KangarooTwelve and BLAKE3 may be added as soon as fast implementations become available in Ruby.

    Install

    Windows

    Please install Ruby first.

    gem install file-digests

    Linux/macOS

    sudo gem install file-digests

    Usage

    Usage: file-digests [options] [path/to/directory] [path/to/database_file]
           By default the current directory will be operated upon, and the database file will be placed in the current directory as well.
           Should you wish to check the current directory but place the database elsewhere, you can provide "." as the first argument, and the path to a database_file as the second.
        -a, --auto                       Do not ask for any confirmation.
        -d, --digest DIGEST              Select a digest algorithm to use. Default is "BLAKE2b512".
                                         You might also consider using the slower "SHA512-256" or the even slower "SHA3-256".
                                         Digest algorithm should be one of the following: BLAKE2b512, SHA3-256, SHA512-256.
                                         You only need to specify an algorithm on the first run, your choice will be saved to a database.
                                         Any time later you could specify a new algorithm to change the current one.
                                         Transition to a new algorithm will only occur if all files pass the check by digests which were stored using the old one.
        -f, --accept-fate                Accept the current state of files that are likely damaged and update their digest data.
        -h, --help                       Prints this help.
        -p, --duplicates                 Show the list of duplicate files, based on the information out of the database.
        -q, --quiet                      Less verbose output; still reports any found issues.
        -t, --test                       Perform a test to verify directory contents.
                                         Compare actual files with the stored digests, check if any files are missing.
                                         Digest database will not be modified.
        -v, --verbose                    More verbose output.
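
    For example, a few illustrative invocations based on the usage above (the paths are placeholders):

    # First run: calculate digests and create the database in the current directory
    file-digests .

    # Later: verify only, without modifying the digest database
    file-digests --test .

    # Keep the database outside the directory being checked
    file-digests . ~/digests/photos.sqlite

    # Switch digest algorithms (applied only once all files pass the check)
    file-digests --digest SHA3-256 .

    # List duplicate files based on the stored digests
    file-digests --duplicates .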
    

    Contributing

    Please see our Contributing Guidelines for details.

    Visit original content creator repository
    https://github.com/senotrusov/file-digests

  • Vida-exercise

    Vida front end exercise

    A brief story of this web application

    This is the very first web app I ever built in AngularJS.

    It was done during my seven-and-a-half-hour interview with a company I worked with, on the 9th of March 2017. It took me about four and a half hours to complete. I did it that day at their offices, after a quick introduction to the company by the CTO and a brief overview of my previous job experience.

    Basically, during those hours I quickly studied the very basic AngularJS statements needed to complete the application, which is why it is not very standardized.

    It was later reviewed with the CTO, to whom I explained the decisions I made in terms of coding style and layout.

    The story continues because I got the job!

    This is the exercise that got me the job and the opportunity to further improve my knowledge of AngularJS!

    In this company I had the opportunity to build a new platform from scratch using the more reasonable AngularJS standards that are commonly adopted.

    Indeed, in this exercise there is no coding style guide, no folder structure, and none of the other common AngularJS standards (remember that this was my first ever single-page app done using Angular).

    All the functionalities for this app are inside one controller that consumes Vida’s APIs.

    The flow is pretty much all procedural: it starts with a very basic login that, if successful, shows a list of clients in an HTML table. Via the More Details links, all of a client’s details can be seen.

    Build the app to make it work

    • Run npm install or another package manager like yarn.
    • Then run gulp to build the code: this will create the dist folder.
    • Start the server with node server.
    • Open localhost:9010 to see the app running.

    Authorization credentials

    User: codingtest@vida.co.uk
    Password: CodingExercise

    Note: Vida’s APIs no longer work, so the credentials above are now invalid. As I left the company a couple of years ago, I cannot provide any new ones.

    Visit original content creator repository
    https://github.com/Ferie/Vida-exercise

  • puppet-selinux

    SELinux module for Puppet

    Table of Contents

    1. Overview
    2. Module Description – What the module does and why it is useful
    3. Usage – Configuration options and additional functionality
    4. Reference – An under-the-hood peek at what the module is doing and how
    5. Defined Types
    6. Development – Guide for contributing to the module
    7. Authors

    Overview

    This class manages SELinux.

    Requirements

    • See metadata.json

    Module Description

    This module configures SELinux and/or deploys SELinux-based modules to a running system.

    Upgrading from puppet-selinux 0.8.x

    • Previously, module building always used the refpolicy framework. The default module builder is now ‘simple’, which uses only checkmodule. Not all features are supported by this builder.

      To build modules using the refpolicy framework like previous versions did, specify the ‘refpolicy’ builder either explicitly per module or globally via the main class

    • The interfaces to the various helper manifests have been changed to be more in line with Puppet file resource naming conventions.

      You will need to update your manifests to use the new parameter names.

    • The selinux::restorecond manifest to manage the restorecond service no longer exists.

    Known problems / limitations

    • The selinux_python_command fact is now deprecated and will be removed in version 4 of the module.
    • If SELinux is disabled and you want to switch to permissive or enforcing you are required to reboot the system (limitation of SELinux). The module won’t do this for you.
    • If SELinux is disabled and the user wants enforcing mode, the module will downgrade to permissive mode instead, to avoid transitioning directly from disabled to enforcing after a reboot and potentially breaking the system. The user will receive a warning when this happens.
    • If you add file contexts with semanage fcontext (which is what selinux::fcontext does), the order is important. If you add /my/folder before /my/folder/subfolder, only /my/folder will match (a limitation of SELinux). There is no such limitation for file contexts defined in SELinux modules. (GH-121)
    • If you try to remove a built-in permissive type, the operation will appear to succeed but will actually have no effect, making your puppet runs non-idempotent.
    • The selinux_port provider may misbehave if the title does not correspond to the format it expects. Users should use the selinux::port define instead, except when purging resources.
    • Defining port ranges that overlap with existing ranges is currently not detected, and will cause semanage to error when the resource is applied.
    • On Debian systems, the defined types fcontext, permissive, and port do not work because of PA-2985.

    Usage

    Generated puppet strings documentation with examples is available in the REFERENCE.md

    It’s also included in the docs/ folder as simple html pages.

    Reference

    Basic usage

    include selinux

    This will include the module and allow you to use the provided defined types, but will not modify existing SELinux settings on the system.

    More advanced usage

    class { 'selinux':
      mode => 'enforcing',
      type => 'targeted',
    }

    This will include the module and manage the SELinux mode (possible values are enforcing, permissive, and disabled) and enforcement type (possible values are targeted, minimum, and mls).

    Note on SELinux mode changes

    Changing SELinux between enforcing/permissive and disabled requires a reboot to take effect.

    When transitioning from disabled to enforcing:

    1. The module sets SELinux to permissive, which requires a reboot to take effect.
    2. After the reboot, the module updates the configuration and running state to enforcing.

    When transitioning from enforcing to disabled:

    1. The module sets SELinux to disabled, which requires a reboot to take effect, and sets the running state to permissive until then.
    2. After the reboot, SELinux will be fully disabled.

    Deploy a custom module using the refpolicy framework

    selinux::module { 'resnet-puppet':
      ensure    => 'present',
      source_te => 'puppet:///modules/site_puppet/site-puppet.te',
      source_fc => 'puppet:///modules/site_puppet/site-puppet.fc',
      source_if => 'puppet:///modules/site_puppet/site-puppet.if',
      builder   => 'refpolicy'
    }

    Using pre-compiled policy packages

    selinux::module { 'resnet-puppet':
      ensure    => 'present',
      source_pp => 'puppet:///modules/site_puppet/site-puppet.pp',
    }

    Note that pre-compiled policy packages may not work reliably across all RHEL / CentOS releases. It’s up to you as the user to test that your packages load properly.

    Set a boolean value

    selinux::boolean { 'puppetagent_manage_all_files': }

    Defined Types

    • boolean – Set seboolean values
    • fcontext – Define fcontext types and equals values (see the sketch below)
    • module – Manage an SELinux module
    • permissive – Set a context to permissive
    • port – Set selinux port context policies (see the sketch below)
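
    For illustration, minimal sketches of the fcontext and port defines. The resource titles and parameter names (seltype, port, protocol) here are assumptions; check the parameter lists in REFERENCE.md before use.

    # Hypothetical pathspec: label web content under /srv/www
    selinux::fcontext { '/srv/www(/.*)?':
      seltype => 'httpd_sys_content_t',
    }

    # Hypothetical title: allow httpd to bind to port 8080
    selinux::port { 'allow-http-8080':
      seltype  => 'http_port_t',
      port     => 8080,
      protocol => 'tcp',
    }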

    Development

    Things to remember

    • The SELinux tools behave oddly when SELinux is disabled
      • semanage requires --noreload while in disabled mode when adding or changing something
      • Only a few --list operations work
    • Run acceptance tests: ./test-acceptance-with-vagrant

    Facter facts

    The fact values might be unexpected while in disabled mode: one could expect config_mode to be set, but only the boolean enabled is set.

    The most important facts:

    Fact                                     Mode: disabled   Mode: permissive                          Mode: enforcing
    $facts['os']['selinux']['enabled']       false            true                                      true
    $facts['os']['selinux']['config_mode']   undef            Value of SELINUX in /etc/selinux/config   Value of SELINUX in /etc/selinux/config
    $facts['os']['selinux']['current_mode']  undef            Value of getenforce, downcased            Value of getenforce, downcased

    Authors

    Visit original content creator repository https://github.com/voxpupuli/puppet-selinux
  • quetzal

    Quetzal — A RESTful API for data and metadata management.

    Quetzal

    Quetzal (short for Quetzalcóatl, the feathered snake) is a RESTful API designed to store data files and manage their associated metadata.

    Quetzal is an application that uses Cloud storage providers and non-structured databases to help researchers organize their data and metadata files. Its main feature is to provide a remote, virtually infinite, storage location for researchers’ data, while providing an API to encapsulate data/metadata operations. In other words, researchers and teams can work with large amounts of data that would be too large for local analyses, using Quetzal to simplify the complexity of Cloud resource management.

    Quetzal’s mid-term roadmap is to integrate with large public physiological signal databases like PhysioNet, MIPDB, and TUH, among others. The main objective is to provide researchers and data scientists with a single bank of file datasets, with a unified API to access the data and to encapsulate the heterogeneity of these datasets.

    Features

    There are two scenarios where Quetzal was designed to help:

    • Imagine you want to apply a data processing pipeline to a large dataset. There are several solutions for executing and parallelizing your code, but where is the data? Moreover, imagine that you want to do a transversal study: how do you manage the different sources? How do you download them?

      Quetzal provides a single data source with a simple API that lets you easily define the scope of your study; with a brief Python script that uses the Quetzal client, you can download your dataset.

    • Let’s say that you are preparing a new study involving some data collection protocol. You could define a procedure where the data operators or technicians copy the data files to a disk, Google Drive, or Dropbox, along with the notes associated with each session, like the subject’s study identifier, date, age, temperature, etc. Doing this manually would be error-prone. Moreover, the structure of these notes (i.e. the metadata) may evolve quickly, so you either save them as manual notes, text files, or in some database that gives you the flexibility to quickly adapt its structure.

      Using the Quetzal API, you can automate the upload and safe storage of the study files and associate metadata with those files, while keeping the liberty to set and modify the metadata structure as you see fit.

    In brief, Quetzal offers the following main features:

    • Storage of data files, based on cloud storage providers, which benefits from all of the features from the provider, such as virtually infinite storage size.
    • Unstructured metadata associated with each file. Quetzal does not force users to organize their metadata in a particular way; it lets them keep whatever structure they prefer.
    • Structured metadata views for metadata exploration or dataset definition. By leveraging PostgreSQL, unstructured metadata can be queried as JSON objects, letting the user express what subset of the data they want to use.
    • Metadata versioning. Changes to metadata are versioned, which is particularly useful to ensure that datasets are reproducible.
    • Endpoints and operations defined using the OpenAPI v3 specification.

    Documentation

    Quetzal’s documentation is available on readthedocs. The API documentation is embedded in its specification; the best way to visualize it is through the ReDoc API reference documentation site.

    Support

    If you are having issues, please let us know by opening an issue or by sending an email to support@quetz.al.

    License

    The project is under the BSD 3-clause license.

    See the authors page for more information on the authors and copyright holders.

    Visit original content creator repository https://github.com/quetz-al/quetzal
  • Hanacaraka-Recognition-HD-CNN

    Hanacaraka Recognition HD-CNN

    Image recognition for Javanese script using YOLOv4 Darknet and HD-CNN. This project uses YOLOv4 as the object detector, and each detected object will be classified by HD-CNN.

    Environment

    • CentOS Stream release 9
    • CUDA Toolkit 11.8
    • cuDNN 8.6
    • python 3.8.16
    • tensorflow 2.12.0
    • opencv 4.6.0

    Usage

    Installation

    git clone https://github.com/jansen062001/Hanacaraka-Recognition-HD-CNN.git

    Training and Testing YOLOv4 Darknet

    1. Preparing The Dataset

      • Download and unzip the augmented dataset with YOLO Darknet format from roboflow: https://universe.roboflow.com/thesis-dicgg/hanacaraka-recognition

        Hanacaraka YOLOv4 Darknet.v14i.darknet
        │   README.dataset.txt
        │   README.roboflow.txt
        │
        └───train
              ...
              9_png.rf.f7a6d330b72103e36cc779f7a2c5d075.jpg
              9_png.rf.f7a6d330b72103e36cc779f7a2c5d075.txt
              _darknet.labels
      • Because our YOLOv4 model uses a 416×416 (W×H) input, each image in the dataset needs to be sliced into 416×416 tiles. Use this GitHub repo to do this work, or see the generic slicing sketch at the end of this step.
      • Copy all sliced images (train and test) and classes.names into ./yolov4_darknet/dataset/raw/
      • Rename classes.names to classes.txt
      • Run this command to re-label the dataset and move it into ./yolov4_darknet/dataset/processed/
        python -m yolov4_darknet.generate_yolo_dataset --train_size=70 --valid_size=20 --test_size=10
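
      If you prefer not to use the referenced repo for slicing, the operation itself is straightforward. Below is a generic sketch (not the referenced repo’s code); it slices an image into 416×416 tiles and leaves the YOLO label remapping out.

        # Generic illustration: slice an image into 416x416 tiles.
        # Not the referenced repo's code; YOLO label files are not handled here.
        from pathlib import Path
        from PIL import Image

        TILE = 416

        def slice_image(src, out_dir):
            img = Image.open(src)
            out = Path(out_dir)
            out.mkdir(parents=True, exist_ok=True)
            for top in range(0, img.height, TILE):
                for left in range(0, img.width, TILE):
                    # Clamp the box at the image borders
                    box = (left, top, min(left + TILE, img.width), min(top + TILE, img.height))
                    img.crop(box).save(out / f"{Path(src).stem}_{top}_{left}.jpg")

        slice_image("example.jpg", "sliced_tiles")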
    2. Training

      python -m yolov4_darknet.train

      The .weights file from the training process will be placed in ./yolov4_darknet/weights/

    3. Testing

      • Put the image file in the directory ./yolov4_darknet/data/

      • Run this command

        python -m yolov4_darknet.test --width=416 --height=416 --filename=example.jpg

        A file called output_example.jpg will be created in the directory ./yolov4_darknet/data/ as the output

    Training and Testing HD-CNN

    1. Preparing The Dataset

      • Download and unzip the augmented dataset with YOLO Darknet format from roboflow: https://universe.roboflow.com/thesis-dicgg/hanacaraka-recognition

        Hanacaraka YOLOv4 Darknet.v14i.darknet
        │   README.dataset.txt
        │   README.roboflow.txt
        │
        └───train
              ...
              9_png.rf.f7a6d330b72103e36cc779f7a2c5d075.jpg
              9_png.rf.f7a6d330b72103e36cc779f7a2c5d075.txt
              _darknet.labels
      • Copy all files inside the train folder into ./hd_cnn/dataset/raw/
      • Rename _darknet.labels to classes.txt
      • Run this command to re-label the dataset and move it into ./hd_cnn/dataset/processed/
        python -m hd_cnn.generate_hdcnn_dataset --train_size=70 --valid_size=20 --test_size=10
    2. Training

      python -m hd_cnn.train --model=hd_cnn

      The .weights file from the training process will be placed in ./hd_cnn/weights/

    3. Testing

      • Put the image file in the directory ./hd_cnn/data/
      • Run this command
        python -m hd_cnn.test --filename=example.jpg

    Run YOLOv4 + HD-CNN

    • Put the image file in the directory ./
    • Run this command
      python main.py --filename=example.jpg
    • After the process completes, a file called result.jpg will be created in the directory ./ as the final output

    Acknowledgements

    Visit original content creator repository
    https://github.com/jansen062001/Hanacaraka-Recognition-HD-CNN

  • CryptoTracker-Repo

    Crypto-tracker – Your Cryptocurrency Tracker

    Welcome to Crypto-Tracker, your go-to solution for tracking the cryptocurrency market! Crypto-Tracker is an open-source web application that empowers you to effortlessly monitor real-time market data and stay informed about the latest crypto trends.

    About Crypto-Tracker

    Crypto-Tracker is a user-friendly and responsive web application developed to provide a comprehensive solution for cryptocurrency enthusiasts. It allows you to:

    • Real-time Market Data: Stay up-to-date with real-time cryptocurrency prices, market capitalization, and trading volume.

    • News Feed: Read the latest news and updates from the cryptocurrency industry, keeping you well-informed.

    • Responsive Design: Enjoy a seamless experience on various devices, from desktops to smartphones.

    • User-friendly Interface: Navigate the application with ease, thanks to an intuitive design.

    Features

    Crypto-Tracker comes equipped with a range of features tailored to cryptocurrency enthusiasts:

    • Real-time Price Updates: Get live updates on cryptocurrency prices, market caps, and trading volumes.

    • News Aggregator: Stay in the loop with the latest cryptocurrency news and developments.

    • Responsive Design: Enjoy a seamless experience on desktops, tablets, and mobile devices.

    Getting Started

    To start using Crypto-Tracker, follow these steps:

    1. Clone the Repository: Clone this repository to your local machine.

      git clone https://github.com/Dixittushar/CryptoTracker-Repo.git
    2. Install Dependencies: Navigate to the project directory and install the required dependencies.

      cd crypto-tracker
      npm install
    3. Run the Application: Start the application locally.

      npm start
    4. Access the Application: Open your web browser and visit http://localhost:3000 to use Crypto-Tracker.

    Contributing

    We welcome contributions from the open-source community to improve Crypto-Tracker. If you want to contribute, please follow these steps:

    1. Fork the repository.

    2. Create a new branch for your feature or bug fix.

    3. Make your changes and ensure they are thoroughly tested.

    4. Submit a pull request to the main repository, explaining the purpose of your changes and any relevant details.

    5. Our team will review your pull request, provide feedback, and merge it once it meets the project’s standards.

    Please review our Contribution Guidelines for more information.

    License

    This project is licensed under the MIT License. See the LICENSE file for more details.


    Thank you for choosing Crypto-Tracker to manage your cryptocurrency portfolio. We hope you find this application valuable and user-friendly.

    Visit original content creator repository
    https://github.com/Dixittushar/CryptoTracker-Repo

  • osw_keeby

    Samsung Tizen OS Application

    My AI Personal Trainer, 🔥S-Coach!

    Project: Machine-learning-based Samsung Galaxy Gear healthcare app

    The 9th OSS (Open Source Software) Grand Developers Challenge: a project that advanced to the finals of the Samsung Electronics company-proposed challenge track

    • ML(Classifier) + Web(Backend, Server) + Smartwatch App(Frontend, Client)
      • 🚀 This repo is part of the full project code.

    URL

    ⭐ Ctrl + left-click a URL to open it as an external link 😀

    Content                                   URL
    1. Competition introduction               https://www.oss.kr/notice/show/6008d9bc-66f0-4373-a9df-19a8973c7038
    2. Demo video                             https://youtu.be/p5vPWqi1B6w
    3. Presentation slides                    https://www.slideshare.net/SuHyunCho2/sws-56703648#1
    4. Development documentation              https://www.slideshare.net/secret/bsfNKp1uR5Y1q8
    5. Paper written after the competition    https://www.slideshare.net/SuHyunCho2/recognition-of-anaerobic-based-on-machine-learning-using-smart-watch-sensor-data ([paper site1] / [paper site2])


    S-coach

    An AI personal trainer app based on machine learning, running on a Samsung Tizen smartwatch.

    Note

    • A Gear (Gear 2, Gear S, Gear S2) application based on Samsung’s Tizen OS
    • tizen-sdk-2.3.1
    • Device optimization completed for the Samsung smartwatch models above (Samsung Galaxy Gear 2, Gear S, Gear S2)
    • Tizen app type: Companion (operates with a Samsung Galaxy S4, Android 4.4)

    Project Summary

    A machine-learning-based AI fitness and health coach application.
    Based on the 3-axis accelerometer and 3-axis gyroscope sensors, it tracks and schedules the user’s motion in real time, automatically determining, recording, and managing ‘which exercise’ was performed, ‘how many reps’ were done, and even the ‘calories burned’.
    In addition, once the user starts exercising, it estimates the expected average heart rate and automatically finds and plays the music whose BPM most closely matches it, to help drive the workout.

    About Train Models(optimized)

    • Performance(Accuracy): about 96.7% for unseen data [2016. 10]
    • Model type: Discriminative Model ( P ( y | X ) ) for inference
    • Learning Type: Classification on Supervised Learning.
    • Uses dimensionality reduction techniques, e.g. PCA and LDA (Fisher’s LDA)
    • Uses kernel tricks, e.g. linear and RBF kernels
    • Hybrid stacking model based on an SVM (Support Vector Machine) framework and others (see the sketch below)
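
    For illustration only, here is a minimal scikit-learn sketch of the kind of pipeline described above (dimensionality reduction feeding an RBF-kernel SVM classifier). The features, class count, and hyperparameters are placeholders, not the project’s actual code or data.

    # Illustrative sketch: PCA + RBF-kernel SVM, standing in for the hybrid
    # stacking model described above. Synthetic placeholder data only.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 36))    # placeholder: windowed 6-axis sensor features
    y = rng.integers(0, 4, size=600)  # placeholder: 4 exercise classes

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = make_pipeline(
        StandardScaler(),                          # normalize sensor features
        PCA(n_components=12),                      # dimensionality reduction
        SVC(kernel="rbf", C=10.0, gamma="scale"),  # RBF-kernel SVM classifier
    )
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))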

    About Project Environments

    • client side
      • python 3.4 / 2.7
      • tizen 2.3.1
      • java 8
      • android 4.4
      • windows 7
    • server side
      • ubuntu 14
      • AWS EC2 free tier
      • flask 0.9
      • nginx 1.4.6
      • mariadb 5.5.44
      • uwsgi 1.9.17.1
      • sqlalchemy 0.15

    Visit original content creator repository
    https://github.com/humblem2/osw_keeby

  • Community Warning

    Community Warning

    An outfit called “Yumpu” has stolen an earlier version of my work on this code and is trying to make money off of it.

    Why I Believe Yumpu Are Thieves

    These guys claim to be a self-publishing website. However, I have never given them permission to publish my work. In fact, I have no relationship with them at all. From what I can tell, they took old, publicly available versions of my work and now offer them for sale.

    As far as I am concerned, they are thieves. I implore anyone reading this to avoid doing business with them.

    thisoldtoolbox-edirectory

    Please, please, please… read this entire page BEFORE trying to use this repo.

    This repo is part of my This Old Toolbox set of repos, which collectively host various system administration/management tools I’ve written over the years, for a variety of platforms and environments. I’m making them public so that others might find the ideas and/or the accumulated knowledge helpful to whatever they need to create.

    History of this specific tool

    In early 2005, I was working in a NetWare environment, and there was a business need to edit almost a thousand eDirectory User objects to find and remove specific bits of information. I was tasked with engineering a solution that didn’t involve having the admins do it by hand.

    My solution leveraged the fact that the OS shipped with the Perl v5.8 interpreter and a set of Novell-supplied Perl modules (Universal Component Services, or UCS) that enabled interaction with NetWare and eDirectory.

    About eDirectory

    eDirectory is a multi-platform directory service, one that is a lot closer to the X.500 ideals than just about anything else. It started on Novell NetWare, but was subsequently ported to other platforms, including Linux.

    IIRC, my tool was written to operate in an eDirectory v8.7 environment.

    Perl on NetWare

    Before you dive into my code, know that Perl on NetWare had a number of peculiarities that could trip up even an experienced Perl programmer. The Perl on NetWare NDK has a complete listing; however, some items of specific interest here include:

    • It is mandatory to use lexically scoped variables (with help of the my() operator) whenever possible
    • A script that introduces an infinite loop cannot be terminated (and I can verify this from experience!)
    • The Perl debugger restart option is not supported

    There are additional peculiarities of Perl on NetWare, as it specifically relates to the Universal Component APIs:

    • If you have a require or a use statement that loads the UCS API module, but you never instantiate any object provided by that module in your code, the server may ABEND
    • The use strict; statement is problematic, because the UCS APIs seem to perform some redefinitions that the directive doesn’t like; however, the use warnings; statement works, as do the -c and -w parameters on the Perl invocation
    • Novell did not publish separate Perl-oriented UCS documentation; refer to the Novell Script for NetWare (NSN) UCS documentation; the Objects, Properties and Methods are the same, only the language syntax differs (specifically, reference the Novell Developer Kit (NDK) NSN Components (Parts One and Two), as well as the NDK Novell eDirectory Schema Reference)
    • eDirectory error codes are not available to the Perl environment; many UCS methods return, at best, only Boolean (yes/no, TRUE/FALSE, OK/FAIL) values, and you will have to use DSTRACE.NLM to capture eDirectory error information

    NetWare and eDirectory introduce some additional wrinkles in terms of syntax:

    • Paths to server-local files are referenced using the syntax VOLUME:/PATH/TO/FILE
    • eDirectory context references are in the form nds:\TREE\TOP O\OU
    • Since “\” is a special character in Perl, it must be escaped, and so should be represented in your code as “\\” (see the short snippet below)
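
    A minimal sketch of that escaping rule, using hypothetical tree and volume names; this is plain Perl string handling, not the UCS API:

    # Hypothetical names; this only demonstrates how backslashes must be
    # escaped inside double-quoted Perl strings.
    my $log_file = "VOL1:/LOGS/EDIR_TOOL.LOG";    # server-local file path
    my $context  = "nds:\\MYTREE\\ACME\\USERS";   # eDirectory context reference
    print "$context\n";                           # prints: nds:\MYTREE\ACME\USERS
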
    Adding Perl Modules to NetWare?

    The Perl community has impressive libraries of add-on Perl modules (such as CPAN, the Comprehensive Perl Archive Network). However, adding the typical Perl module to NetWare’s Perl installation is a non-trivial exercise, involving either establishing a NetWare development environment, or cross-compiling on another platform. Neither choice is for the inexperienced or faint-of-heart. For the vast majority of admins, you’re limited to whatever Perl modules are included in the NetWare distribution.

    Understanding eDirectory Objects and Attributes

    In order to leverage my code, it is first necessary to have a firm grasp of eDirectory, and in particular Objects and their Attributes.

    A quick overview of basic eDirectory Object concepts

    Fundamentally, an Object is a collection of groups of data of various types. Users are Objects. Printers are Objects. Groups are Objects.

    Objects have Attributes. The specific Attributes of an Object are defined by its type; that is, a User Object consists of a different set of Attributes than a Group Object. A particular Attribute (for example, the Full Name) might appear in many different Object types.

    Attributes have a Value; that is, the data that the Attribute contains. Some Attributes are Multi-Valued Attributes (MVAs) and may contain more than one Value; the membership list of a Group is a common example.

    In the eDirectory world, the Schema defines (among other things) the available Attributes, the Attributes used by the various Objects (this is also called Layouts), the data type(s) of the Values (which is known as Syntax), and the Values associated with the various Attributes.

    Unlike AD, in eDirectory an OU really is an OU – it is a “container” (as envisioned in X.500) that contains other Objects in an actual 3-dimensional data representation. The Context of an Object is important; the namespace is not flat.

    Understanding these inter-relationships, and the hierarchical nature of eDirectory, is important to understanding how to access and safely manipulate the eDirectory Tree when using a direct tool such as the UCS API.

    My code confines itself to using UCS to access User Objects; however, that is an artificial limitation. Many other Object types exist and are accessible via the UCS APIs, and the APIs provide many other methods beyond those presented in my code.

    edir_tool.pl is a template!

    While the edir_tool.pl file is based on code I actually ran in a production environment, here I would call it proof-of-concept. You can probably take it and, with a few minor tweaks, get it to run in an appropriate NetWare environment; but it’s almost 20 years old, and I didn’t keep up with NetWare after v6.5.

    The variables you’d probably need to adjust include (at minimum):

    • $Tree
    • $Top_O
    • $OU
    • $login_context
    • $ServerIP

    Keep in mind that, as written, even when it is working flawlessly, it doesn’t do anything except log in to eDirectory, perform a simple search, and write a simple log file.

    Additional Perl Code

    Now that you understand the basics, to get useful work done, you need to manipulate Objects. This repo has a number of code examples that show how to approach various operations. These files are not stand-alone code; they must be integrated into a larger program like edir_tool.pl.

    create_object.pl

    Demonstrates the basics of creating a new Object in an eDirectory Tree.

    delete_object.pl

    An example of deleting an existing eDirectory Object from the eDirectory Tree.

    enumerate.pl

    This code snippet gives you some tools to explore the Schema, by listing the Layouts and Attributes of the Objects defined in the eDirectory Tree. Know before you go.

    find_object.pl

    In this file, I provide code to find specific eDirectory Objects, without using the Search method, or by using the Item method to look in the results of a search.

    get_object.pl

    Provides an example of reading Values from the Attributes of an Object.

    get_properties

    FORTHCOMING

    modify_object.pl

    FORTHCOMING

    Visit original content creator repository
    https://github.com/QuantumTux/thisoldtoolbox-edirectory