Install this package and include it in any JS file:
const ApiTest = require('../../index');

// The package comes with a Check module that has predefined functions to check common types.
const Check = ApiTest.check;

// Send a GET request to /hello and check that the returned value is a string and strictly 'hello'.
ApiTest.get('/hello', (v) => Check.typ.string(v) && v === 'hello', 'Get hello message from server', 'STANDARD API');

// You can define custom common functions by overriding Check ones.
Check.success = v => !!v['result'] && v.result === 'success';

// Send a POST request with a { name: 'foo_0' } payload and check the return value according to Check.success.
ApiTest.post(`/post/user`, { name: 'foo_0' }, (v) => Check.success(v), `Post foo_0 user`, 'USER API MANAGEMENT');

// Send a POST request with a bad payload and check that it leads to a managed error.
Check.error = v => !!v['result'] && v.result === 'failed';
ApiTest.post(`/post/user`, { bad_value: 'bad_content' }, (v) => Check.error(v), `Post a bad payload to add user`, 'USER API MANAGEMENT');

// You have access to the 'testFor' function, which calls the s => { ... } function for every element of the [ 'foo_1', ... ] list.
ApiTest.testFor(['foo_1', 'foo_2', 'foo_3'], s => {
  ApiTest.post(`/post/user`, { name: s }, (v) => Check.success(v), `Post ${s} user`, 'USER API MANAGEMENT');
});

// When every test is declared, use the ApiTest.run() function to run everything.
ApiTest.run();
Then run the tests using the Node interpreter:
$ node tests/hello/hello.js --help
Usage: hello [options] <url>
Options:
-ss, --selfSigned <bool> Accept Self Signed Certificate
-ucao, --useAuthorityOnly <bool> Use only authority file
-ca, --authority <path> Path to certificate authority file
-cert, --certificate <path> Path to certificate file
-pk, --privateKey <path> Path to the associate private key file
-up12, --usePkcs12 <bool> Use P12 file
-p12, --pkcs12 <path> Path to the associate P12 file
-p12pw, --pkcs12Password <string> Password associated with the P12 file
-https, --useHttps <bool> Should use HTTPS instead of HTTP
-vr, --verboseResponse <bool> Should print server responses
-vr, --verboseError <bool> Should print server errors responses
-a, --await <[1; 360000] as integer> Set an await time between requests
-m, --matching <string> Execute tests matching given string
-pn, --productName <string> Execute parametered product named tests
-h, --help display help for command
$ node tests/hello/hello.js http://127.0.0.1:4200/
A utility to check whether there are any changes in your files.
If you move your files around, perhaps across platforms, maybe through unreliable connections, or with tools you don't quite trust, this utility will help you verify the result and spot possible errors.
It will also help you find issues with your storage: silent data corruption, bit flips/bitrot, bad blocks, and other hardware or software faults.
You could use this utility to verify the restore process for your backups.
It can show a list of duplicate files as well.
It is a CLI utility written in Ruby. It is cross-platform (Linux/UNIX/Windows/macOS) and is tested on Ubuntu 20.04, Windows 10, and macOS Catalina.
How it works
Given a directory, it calculates digests/hashes (BLAKE2b512, SHA3-256, or SHA512-256) for the files it contains.
Those digests are then kept in a SQLite database. By default, the database is stored in the same directory as the rest of the files. You can specify any other location.
Any time later, you can run the tool again.
It will check whether any file has become corrupted or gone missing due to hardware/software issues.
It will store digests of updated files. It is assumed that if a particular file has both its mtime and digest changed, that is a sign of a legitimate update and not of a storage fault.
New files will be added to the database.
Renames will be tracked.
If files are missing from your storage, the tool will ask for your confirmation before removing the information about those files from the database.
Digest algorithms
You can change the digest algorithm at any time. The transition to a new algorithm will only occur if all files pass the check against the digests that were stored using the old one.
Faster algorithms like KangarooTwelve and BLAKE3 may be added as soon as fast implementations become available in Ruby.
Usage: file-digests [options] [path/to/directory] [path/to/database_file]
By default, the current directory will be operated upon, and the database file will be placed in the current directory as well.
Should you wish to check the current directory but place the database elsewhere, you can provide "." as the first argument and the path to a database_file as the second.
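For example, to check the current directory while keeping the database elsewhere (the database path below is only an illustration):
file-digests . /backups/digests.sqlite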
-a, --auto Do not ask for any confirmation.
-d, --digest DIGEST Select a digest algorithm to use. Default is "BLAKE2b512".
You might also consider using the slower "SHA512-256" or the even slower "SHA3-256".
Digest algorithm should be one of the following: BLAKE2b512, SHA3-256, SHA512-256.
You only need to specify an algorithm on the first run; your choice will be saved to the database.
Any time later, you can specify a new algorithm to change the current one.
Transition to a new algorithm will only occur if all files pass the check by digests which were stored using the old one.
-f, --accept-fate Accept the current state of files that are likely damaged and update their digest data.
-h, --help Prints this help.
-p, --duplicates Show the list of duplicate files, based on the information out of the database.
-q, --quiet Less verbose output; still reports any issues found.
-t, --test Perform a test to verify directory contents.
Compare actual files with the stored digests, check if any files are missing.
Digest database will not be modified.
-v, --verbose More verbose output.
This is the very first web app I ever did in AngularJS.
It was done during my seven-and-a-half-hour interview with a company I went on to work with, on 9 March 2017. It took me about four and a half hours to complete. I did it that day at their offices, after a quick introduction to the company by the CTO and a brief account of my previous job experience.
Basically, during those hours I quickly studied the very basic AngularJS constructs needed to complete the application, which is why it is not very standardized.
It was later reviewed with the CTO, to whom I explained the decisions I made in terms of coding style and layout.
The story continues, because I got the job!
This exercise was the one that gave me the job and the opportunity to further improve my knowledge of AngularJS!
At this company I had the opportunity to build a new platform from scratch, using the more sensible AngularJS standards that are commonly used.
Indeed, in this exercise there is no coding style guide, no folder structure, and none of the other common AngularJS standards (remember that this was my first-ever single-page app done with Angular).
All the functionality for this app is inside one controller that consumes Vida's APIs.
The flow is pretty much all procedural: it starts with a very basic login that, if successful, shows a list of clients in an HTML table. With the More Details links, all of a client's details can be seen.
Build the app to make it work
Run npm install (or use another package manager, like yarn).
Then run gulp to build the code: this will create the dist folder.
Previously, module building always used the refpolicy framework. The default
module builder is now ‘simple’, which uses only checkmodule. Not all features are
supported with this builder.
To build modules using the refpolicy framework like previous versions did,
specify the ‘refpolicy’ builder either explicitly per module or globally
via the main class.
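As a minimal sketch, selecting the builder globally might look like the following (the default_builder parameter name is an assumption on my part; check REFERENCE.md for the exact interface):

class { 'selinux':
  default_builder => 'refpolicy',
}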
The interfaces to the various helper manifests have been changed to be more in line
with Puppet file resource naming conventions.
You will need to update your manifests to use the new parameter names.
The selinux::restorecond manifest, which managed the restorecond service, no longer exists.
Known problems / limitations
The selinux_python_command fact is now deprecated and will be removed in
version 4 of the module.
If SELinux is disabled and you want to switch to permissive or enforcing you
are required to reboot the system (limitation of SELinux). The module won’t
do this for you.
If SELinux is disabled and the user wants enforcing mode, the module
will downgrade to permissive mode instead to avoid transitioning directly from
disabled to enforcing state after a reboot and potentially breaking the system.
The user will receive a warning when this happens.
If you add file contexts with semanage fcontext (which is what selinux::fcontext
does), the order is important. If you add /my/folder before /my/folder/subfolder,
only /my/folder will match (limitation of SELinux). There is no such limitation
for file contexts defined in SELinux modules. (GH-121)
If you try to remove a built-in permissive type, the operation will appear to succeed
but will actually have no effect, making your puppet runs non-idempotent.
The selinux_port provider may misbehave if the title does not correspond to
the format it expects. Users should use the selinux::port define instead, except
when purging resources.
Defining port ranges that overlap with existing ranges is currently not
detected, and will
cause semanage to error when the resource is applied.
On Debian systems, the defined types fcontext, permissive, and port do not
work because of PA-2985.
Usage
Generated puppet-strings documentation with examples is available in REFERENCE.md.
It is also included in the docs/ folder as simple HTML pages.
Reference
Basic usage
include selinux
This will include the module and allow you to use the provided defined types,
but will not modify existing SELinux settings on the system.
More advanced usage
class { 'selinux':
  mode => 'enforcing',
  type => 'targeted',
}
This will include the module and manage the SELinux mode (possible values are
enforcing, permissive, and disabled) and enforcement type (possible values
are targeted, minimum, and mls).
Note on SELinux mode changes
Changing SELinux between enforcing/permissive and disabled requires a reboot to take effect.
When transitioning from disabled to enforcing:
The module sets SELinux to permissive, which requires a reboot to take effect.
After the reboot, the module updates the configuration and running state to enforcing.
When transitioning from enforcing to disabled:
The module sets SELinux to disabled, which requires a reboot to take effect, and sets the running state to permissive until then.
After the reboot, SELinux will be fully disabled.
Deploy a custom module using the refpolicy framework
Note that pre-compiled policy packages may not work reliably
across all RHEL / CentOS releases. It’s up to you as the user
to test that your packages load properly.
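As a hedged sketch (not taken verbatim from this module's docs), deploying a module built with the refpolicy framework might look like the following; the selinux::module parameter names (source_te, source_fc, source_if, builder) and the profile module paths are assumptions that should be verified against REFERENCE.md:

selinux::module { 'mymodule':
  ensure    => 'present',
  builder   => 'refpolicy',
  source_te => 'puppet:///modules/profile/selinux/mymodule.te',
  source_fc => 'puppet:///modules/profile/selinux/mymodule.fc',
  source_if => 'puppet:///modules/profile/selinux/mymodule.if',
}

The .te, .fc, and .if sources correspond to the type enforcement, file context, and interface files that the refpolicy framework compiles into a policy package.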
Quetzal — A RESTful API for data and metadata management.
Quetzal
Quetzal (short for Quetzalcóatl, the feathered snake) is a RESTful API designed
to store data files and manage their associated metadata.
Quetzal is an application that uses cloud storage providers and non-structured
databases to help researchers organize their data and metadata files.
Its main feature is to provide a remote, virtually infinite storage location
for researchers' data, while providing an API that encapsulates data/metadata
operations. In other words, researchers and teams can work with amounts
of data that would be too large for local analyses, using Quetzal to handle
the complexity of cloud resource management.
Quetzal's mid-term roadmap is to integrate with large public physiological
signal databases such as PhysioNet, MIPDB, and TUH, among others. The main objective
is to provide researchers and data scientists with a single bank of file datasets
and a unified API to access the data and to encapsulate the heterogeneity of
these datasets.
Features
There are two scenarios where Quetzal was designed to help:
Imagine you want to apply a data processing pipeline to a large dataset.
There are several solutions for executing and parallelizing your code, but
where is the data? Moreover, imagine that you want to do a transverse study:
how do you manage the different sources? How do you download them?
Quetzal provides a single data source with a simple API that lets you
easily define the scope of your study and, with a short Python script that
uses the Quetzal client, download your dataset.
Let's say that you are preparing a new study involving some data collection
protocol. You could define a procedure where the data operators or technicians
copy the data files to a disk, Google Drive, or Dropbox, along
with the notes associated with each session, such as subject study identifier,
date, age, temperature, etc. Doing this manually would be error-prone.
Moreover, the structure of these notes (i.e. the metadata) may evolve quickly,
so you either save them as manual notes, text files, or in some database that
gives you the flexibility to quickly adapt its structure.
Using the Quetzal API, you can automate the upload and safe storage of the study
files and associate metadata with these files, while having the liberty to set
and modify the metadata structure as you see fit.
In brief, Quetzal offers the following main features:
Storage of data files, based on cloud storage providers, which benefits from all of the provider's features, such as virtually infinite storage size.
Unstructured metadata associated with each file. Quetzal does not force the user to organize their metadata in a particular way; it lets the user keep whatever structure they prefer.
Structured metadata views for metadata exploration or dataset definition. By leveraging PostgreSQL, unstructured metadata can be queried as JSON objects, letting the user express what subset of the data they want to use.
Metadata versioning. Changes to metadata are versioned, which is particularly useful for ensuring that a dataset is reproducible.
Quetzal's documentation is available on readthedocs. The API documentation is
embedded in its specification; the best way to visualize it is through the
ReDoc API reference documentation site.
Image recognition for Javanese script using YOLOv4 Darknet and HD-CNN. This project uses YOLOv4 as the object detector, and each detected object will be classified by HD-CNN.
Welcome to Crypto-Tracker, your go-to solution for tracking the cryptocurrency market! Crypto-Tracker is an open-source web application that empowers you to effortlessly monitor real-time market data and stay informed about the latest crypto trends.
Crypto-Tracker is a user-friendly and responsive web application developed to provide a comprehensive solution for cryptocurrency enthusiasts. It allows you to:
Real-time Market Data: Stay up-to-date with real-time cryptocurrency prices, market capitalization, and trading volume.
News Feed: Read the latest news and updates from the cryptocurrency industry, keeping you well-informed.
Responsive Design: Enjoy a seamless experience on various devices, from desktops to smartphones.
User-friendly Interface: Navigate the application with ease, thanks to an intuitive design.
Features
Crypto-Tracker comes equipped with a range of features tailored to cryptocurrency enthusiasts:
Real-time Price Updates: Get live updates on cryptocurrency prices, market caps, and trading volumes.
News Aggregator: Stay in the loop with the latest cryptocurrency news and developments.
Responsive Design: Enjoy a seamless experience on desktops, tablets, and mobile devices.
Getting Started
To start using Crypto-Tracker, follow these steps:
Clone the Repository: Clone this repository to your local machine.
Tizen app type: Companion (operating with a Samsung Galaxy S4, Android 4.4)
Project Summary
A machine-learning-based AI fitness/health coach application.
Based on a 3-axis accelerometer and a 3-axis gyroscope, it tracks and schedules the user's motion in real time, automatically determining, recording, and managing which exercise was performed, how many repetitions were done, and even how many calories were burned.
In addition, once the user starts exercising, it estimates their heart rate and, to help drive the workout, automatically finds and plays the music whose BPM most closely matches the user's estimated average heart rate.
About the Trained Models (optimized)
Performance (accuracy): about 96.7% on unseen data [2016.10]
Model type: discriminative model ( P(y | X) ) used for inference
Learning type: classification (supervised learning)
Uses dimensionality reduction techniques, e.g. PCA and LDA (Fisher's LDA)
Uses kernel tricks, e.g. linear and RBF kernels
A hybrid stacking model based on an SVM (Support Vector Machine) framework, combined with other models
Some outfit called "Yumpu" has stolen an earlier version of my work on this code and is trying to make money off of it.
Why I Believe Yumpu Are Thieves
These guys claim to be a self-publishing website. However, I have never given them permission to publish my work. In fact, I have no relationship with them at all. From what I can tell, they took old, publicly available versions of my work and now offer them for sale.
As far as I am concerned, they are thieves. I implore anyone reading this to avoid doing business with them.
thisoldtoolbox-edirectory
Please, please, please… read this entire page BEFORE trying to use this repo.
This repo is part of my This Old Toolbox set of repos, which collectively host various system administration/management tools I’ve written over the years, for a variety of platforms and environments. I’m making them public so that others might find the ideas and/or the accumulated knowledge helpful to whatever they need to create.
History of this specific tool
In early 2005, I was working in a NetWare environment, and there was a business need to edit almost a thousand eDirectory User objects to find and remove specific bits of information. I was tasked with engineering a solution that didn't involve having the admins do it by hand.
My solution leveraged the fact that the OS shipped with the Perl v5.8 interpreter and a set of Novell-supplied Perl modules (Universal Component Services, or UCS) that enabled interaction with NetWare and eDirectory.
About eDirectory
eDirectory is a multi-platform directory service, one that is a lot closer to the X.500 ideals than just about anything else. It started on Novell NetWare, but was subsequently ported to other platforms, including Linux.
IIRC, my tool was written to operate in an eDirectory v8.7 environment.
Perl on NetWare
Before you dive into my code, know that Perl on NetWare had a number of peculiarities that could trip up even an experienced Perl programmer. The Perl on NetWare NDK has a complete listing; however, some items of specific interest here include:
It is mandatory to use lexically scoped variables (with the help of the my() operator) whenever possible
A script that enters an infinite loop cannot be terminated (and I can verify this from experience!)
The Perl debugger restart option is not supported
There are additional peculiarities of Perl on NetWare, as it specifically relates to the Universal Component APIs:
If you have a require or a use statement that loads the UCS API module, but you never instantiate any object provided by that module in your code, the server may ABEND
The use strict; statement is problematic, because the UCS APIs seem to perform some redefinitions that the directive doesn’t like; however, the use warnings; statement works, as do the -c and -w parameters on the Perl invocation
Novell did not publish separate Perl-oriented UCS documentation; refer to the Novell Script for NetWare (NSN) UCS documentation; the Objects, Properties and Methods are the same, only the language syntax differs (specifically, reference the Novell Developer Kit (NDK) NSN Components (Parts One and Two), as well as the NDK Novell eDirectory Schema Reference)
eDirectory error codes are not available to the Perl environment; many UCS methods return, at best, only Boolean (yes/no, TRUE/FALSE, OK/FAIL) values, and you will have to use DSTRACE.NLM to capture eDirectory error information
NetWare and eDirectory introduce some additional wrinkles in terms of syntax:
Paths to server-local files are referenced using the syntax VOLUME:/PATH/TO/FILE
eDirectory context references are in the form nds:\TREE\TOP O\OU
Since “\” is a special character in Perl, it must be escaped, and so should be represented in your code as “\\”
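For instance, a small illustration (the tree, container, and volume names below are made up):

# Server-local file path, using the VOLUME:/PATH/TO/FILE syntax:
my $log_path = 'SYS:/TOOLS/EDIRTOOL.LOG';

# eDirectory context reference; each literal "\" is written as "\\",
# so this string actually contains nds:\MYTREE\MYORG\USERS
my $login_context = "nds:\\MYTREE\\MYORG\\USERS";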
Adding Perl Modules to NetWare?
The Perl community has impressive libraries of add-on Perl modules (such as CPAN, the Comprehensive Perl Archive Network). However, adding the typical Perl module to NetWare’s Perl installation is a non-trivial exercise, involving either establishing a NetWare development environment, or cross-compiling on another platform. Neither choice is for the inexperienced or faint-of-heart. For the vast majority of admins, you’re limited to whatever Perl modules are included in the NetWare distribution.
Understanding eDirectory Objects and Attributes
In order to leverage my code, it is first necessary to have a firm grasp of eDirectory, and in particular Objects and their Attributes.
A quick overview of basic eDirectory Object concepts
Fundamentally, an Object is a collection of groups of data of various types. Users are Objects. Printers are Objects. Groups are Objects.
Objects have Attributes. The specific Attributes of an Object are defined by its type; that is, a User Object consists of a different set of Attributes than
a Group Object. A particular Attribute (for example, the Full Name) might appear in many different Object types.
Attributes have a Value; that is, the data that the Attribute contains. Some Attributes are Multi-Valued Attributes (MVAs) and may contain more than one Value; the
membership list of a Group is a common example.
In the eDirectory world, the Schema defines (among other things) the available Attributes, the Attributes used by the various Objects (this is also called Layouts),
the data type(s) of the Values (which is known as Syntax), and the Values associated with the various Attributes.
Unlike AD, in eDirectory an OU really is an OU – it is a “container” (as envisioned in X.500) that contains other Objects in an actual 3-dimensional data representation. The Context of an Object is important; the namespace is not flat.
Understanding these inter-relationships, and the hierarchical nature of eDirectory, is important to understanding how to access and safely manipulate the eDirectory Tree when using a direct tool such as the UCS API.
My code confines itself to using UCS to access User Objects; however, that is an artificial limitation. Many other Object types exist and are accessible via the UCS APIs,
and the APIs provide many other methods beyond those presented in my code.
edir_tool.pl is a template!
While the edir_tool.pl file is based on code I actually ran in a production environment, here I would call it proof-of-concept. You can probably take it and, with a few minor tweaks, get it to run in an appropriate NetWare environment; but it’s almost 20 years old, and I didn’t keep up with NetWare after v6.5.
The variables you'd probably need to adjust include (at minimum) the following; a hypothetical sketch of their values appears after the list:
$Tree
$Top_O
$OU
$login_context
$ServerIP
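A purely hypothetical sketch of those settings (every value below is made up; substitute your own tree, container, and server details):

my $Tree          = 'MYTREE';              # eDirectory tree name
my $Top_O         = 'MYORG';               # top-level Organization
my $OU            = 'USERS';               # Organizational Unit to operate on
my $login_context = "nds:\\MYTREE\\MYORG"; # escaped context string, as described above
my $ServerIP      = '192.0.2.10';          # NetWare server address (example value)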
Keep in mind that, as written, even when it is working flawlessly, it doesn't do anything except log in to eDirectory, perform a simple search, and write a simple log file.
Additional Perl Code
Now that you understand the basics, to get useful work done, you need to manipulate Objects. This repo has a number of code examples that show how to approach various operations. These files are not stand-alone code; they must be integrated into a larger program like edir_tool.pl.
create_object.pl
Demonstrates the basics of creating a new Object in an eDirectory Tree.
delete_object.pl
An example of deleting an existing eDirectory Object from the eDirectory Tree.
enumerate.pl
This code snippet gives you some tools to explore the Schema, by listing the Layouts and Attributes of the Objects defined in the eDirectory Tree. Know before you go.
find_object.pl
In this file, I provide code to find specific eDirectory Objects, without using the Search method, or by using the Item method to look in the results of a search.
get_object.pl
Provides an example of reading Values from the Attributes of an Object.