Blog

  • Celma

    Celma

    C++ Extension Library Modules for Application

    Version: 1.47.0

This library offers a collection of modules that are often needed when developing applications on UNIX. The main modules are:

    • Argument Handler

Command-line argument parsing that stores the values directly in the destination variables.

    • Logging

      Easy to use logging library.

    • Type name

Returns the name of a type as a string.

    • Indirect access

Create a data structure plus an object that allows accessing the members of the structure by name or by id.

    • Formatting

Various functions and modules for formatting purposes, e.g. text-block formatting, fast integer-to-string conversion, etc.

    Requirements:

    • Boost library
    • C++ 17 compliant compiler

    Building

    Requirements

    To build the software, the following components are required:

• Boost development: libraries and header files.
    • CMake
    • g++ compiler

    To build the documentation, doxygen is required.
    To run the coverage analysis, lcov is needed additionally.

    Compile

    There is a makefile in the top-level directory that supports all build targets:

    • make debug
Builds the debug version with the lowest C++ standard currently supported (C++17).
    • make release
Builds the release version (optimised) with the lowest C++ standard currently supported (C++17).
    • make debug-<C++ version>/make release-<C++ version>
      Builds the debug or release version with the specified C++ version.
• make doxygen
  Creates the HTML documentation from the source code using Doxygen.

    The steps for building are:

• Check if the build/ directory exists.
• If it does not exist, create it.
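
The directory check above can be sketched in shell; the configure/compile invocation is an assumption based on the listed requirements (CMake, g++), not taken from the project's makefile:

```shell
# Create the out-of-source build directory only if it is missing.
test -d build || mkdir build

# Then configure and compile inside it (assumed invocation):
# cd build && cmake .. && make
```
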
    Visit original content creator repository https://github.com/Gemini67/Celma
  • awesome-symfony

    Awesome Symfony

    A curated list of useful Symfony snippets.

Contributions are highly encouraged and very welcome 🙂

    Table of Contents

    Configuration

    Assets

    Assets Base URL

    Symfony 2.6

    # app/config/config.yml
    framework:
        templating:
            assets_base_urls:
                http: ['http://cdn.domain.com']
                ssl:  ['https://secure.domain.com']
            packages:
                # ...

    Symfony 2.7+

    # app/config/config.yml
    framework:
        assets:
            base_path: ~
            base_urls: ['http://cdn.domain.com', 'https://secure.domain.com']

    [1]

    Assets Base URL (Protocol-Relative)

    # app/config/config.yml 
    framework: 
        templating: 
            assets_base_urls: '//static.domain.com/images'

    Assets Versioning

    Symfony 2.6

    # app/config/config.yml
    framework:
        templating:
            assets_version: 'v5'
            assets_version_format: '%%s?version=%%s'

    Symfony 2.7+

    # app/config/config.yml
    framework:
        assets:
            version: 'v5'
            version_format: '%%s?version=%%s'

    [1]

    Named Assets

    # app/config/config.yml 
    assetic:
        assets:
            bootstrap_js:
                inputs:
                    - '@AppBundle/Resources/public/js/jquery.js'
                    - '@AppBundle/Resources/public/js/bootstrap.js'

    Using in Twig templates:

    {% javascripts
        '@bootstrap_js'
        '@AppBundle/Resources/public/js/*' %}
            <script src="{{ asset_url }}"></script>
    {% endjavascripts %}

    Context-Aware CDNs

    # app/config/config.yml
    framework:
        assets:
            base_urls:
                - 'http://static1.domain.com/images/'
                - 'https://static2.domain.com/images/'

    Using in Twig templates:

    {{ asset('logo.png') }}
    {# in a regular page: http://static1.domain.com/images/logo.png #}
    {# in a secure page:  https://static2.domain.com/images/logo.png #}

    [1]

    Packages (Different Base URLs)

    To specify different base URLs for assets, group them into packages:

    # app/config/config.yml
    framework:
        # ...
        assets:
            packages:
                avatars:
                    base_urls: 'http://static_cdn.domain.com/avatars'

    Using the avatars package in a Twig template:

    <img src="{{ asset('...', 'avatars') }}" />

    Directories, Paths

    Get the Project Root Directory

    Use the config parameter %kernel.root_dir%/../:

    some_service:
        class: \path\to\class
        arguments: [%kernel.root_dir%/../]

    Symfony 2
    In a Controller:

$projectRoot = realpath($this->container->getParameter('kernel.root_dir').'/..');

    Symfony 3.3

    In a Controller:

    $projectRoot = $this->get('kernel')->getProjectDir();

    Symfony 4+

    Using autowiring (argument binding):

    # config/services.yaml
    services:
        _defaults:
            bind:
                string $projectDir: '%kernel.project_dir%'

    Then in your class:

    class YourClass
    {
        private $projectDir;
    
        public function __construct(string $projectDir)
        {
        $this->projectDir = $projectDir;
        }
    
    // ...
}

    Email Errors

    Email Logs Related to 5xx Errors (action_level: critical)

    # app/config/config_prod.yml 
    monolog:
        handlers:
            mail:
                type: fingers_crossed
                action_level: critical
                handler: buffered
            buffered:
                type: buffer
                handler: swift
            swift:
                type: swift_mailer
                from_email: error@domain.com
                to_email: error@domain.com
                subject: An Error Occurred!
                level: debug

    Email Logs Related to 4xx Errors (action_level: error)

    # app/config/config_prod.yml
    monolog:
        handlers:
            mail:
                type: fingers_crossed
                action_level: error
                handler: buffered
            buffered:
                type: buffer
                handler: swift
            swift:
                type: swift_mailer
                from_email: error@domain.com
                to_email: error@domain.com
                subject: An Error Occurred!
                level: debug

    Do Not Email Logs for 404 Errors (excluded_404)

    # app/config/config_prod.yml
    monolog:
        handlers:
            mail:
                type: fingers_crossed
                action_level: error
                excluded_404:
                    - ^/
                handler: buffered 
            buffered:
                type: buffer
                handler: swift
            swift:
                type: swift_mailer
                from_email: error@domain.com
                to_email: error@domain.com
                subject: An Error Occurred!
                level: debug

    Import

    Import Mixed Configuration Files

    # app/config/config.yml
    imports:
        - { resource: '../common/config.yml' }
        - { resource: 'dynamic-config.php' }
        - { resource: 'parameters.ini' }
        - { resource: 'security.xml' }
        # ...

    Import All Resources From a Directory

    # app/config/config.yml
    imports:
        - { resource: '../common/' }
        - { resource: 'acme/' }
        # ...

    Import Configuration Files Using Glob Patterns

    Symfony 3.3

    # app/config/config.yml
    imports:
        - { resource: "*.yml" }
        - { resource: "common/**/*.xml" }
        - { resource: "/etc/myapp/*.{yml,xml}" }
        - { resource: "bundles/*/{xml,yaml}/services.{yml,xml}" }
        # ...

    [1]

    Log

    Enable the Monolog processor PsrLogMessageProcessor

    # app/config/config_prod.yml
    services:
        monolog_processor:
            class: Monolog\Processor\PsrLogMessageProcessor
            tags:
                - { name: monolog.processor }

    Hide Event Logs

    # app/config/dev.yml
    monolog:
        handlers:
            main:
                type: stream
                path: "%kernel.logs_dir%/%kernel.environment%.log"
                level: debug
                channels: "!event"

    Organizing Log Files Using Channels (Log Messages to Different Files)

    # app/config/config_prod.yml
    monolog:
        handlers:
            main:
                type: stream
                path: "%kernel.logs_dir%/%kernel.environment%.log"
                level: debug
                channels: ["!event"]
            security:
                type: stream
                path: "%kernel.logs_dir%/security-%kernel.environment%.log"
                level: debug
                channels: "security"

    [1]

    Security

    Impersonating Users

    # app/config/security.yml
    security:
        firewalls:
            main:
                # ...
                switch_user: true

    Switching the user in the URL:
    http://domain.com/path?_switch_user=john

    Session

    Define Session Lifetime

    # app/config/config.yml
    framework:
        session:
            cookie_lifetime: 3600

    Profiler

    Enable the Profiler on Prod For Specific Users

    # app/config/config.yml
    framework:
        # ...
        profiler:
           matcher:
               service: app.profiler_matcher
    
    services:
        app.profiler_matcher:
            class: AppBundle\Profiler\Matcher
            arguments: ["@security.context"]

    namespace AppBundle\Profiler; 
    
    use Symfony\Component\Security\Core\SecurityContext; 
    use Symfony\Component\HttpFoundation\Request; 
    use Symfony\Component\HttpFoundation\RequestMatcherInterface; 
    
    class Matcher implements RequestMatcherInterface 
    { 
        protected $securityContext; 
    
        public function __construct(SecurityContext $securityContext)
        {
            $this->securityContext = $securityContext; 
        } 
    
        public function matches(Request $request)
        { 
            return $this->securityContext->isGranted('ROLE_ADMIN'); 
        }
    }

    Console

    Parallel Asset Dump

    Symfony 2.8

        $ php app/console --env=prod assetic:dump --forks=4

    Symfony 3+

        $ php bin/console --env=prod assetic:dump --forks=4

    Controller

    Cookie

    Set a Cookie

    use Symfony\Component\HttpFoundation\Cookie;
    
    $response->headers->setCookie(new Cookie('site', 'bar'));

    Directories, Paths, and Filesystem

    Root Directory of the Project

The parameter kernel.root_dir points to the app directory. To get the project root directory, use kernel.root_dir/../:

    realpath($this->getParameter('kernel.root_dir')."/../")

    Check If a Path is Absolute

    use Symfony\Component\Filesystem\Filesystem;
    
    //...
    
$fs = new Filesystem();
$fs->isAbsolutePath('/tmp'); // returns true
$fs->isAbsolutePath('c:\\Windows'); // returns true
$fs->isAbsolutePath('tmp'); // returns false
$fs->isAbsolutePath('../dir'); // returns false

    [1]

    Download

    Download (Serve) a Static File

    use Symfony\Component\HttpFoundation\BinaryFileResponse;
    
    // ...
    
    return new BinaryFileResponse('path/to/file');

    Check If a File Exists

    use Symfony\Component\Filesystem\Filesystem;
    
    //...
    
$fs = new Filesystem();
    if (!$fs->exists($filepath)) {
        throw $this->createNotFoundException();
    }

    [1]

Download a File Without Exposing It Directly, and Change the Filename

use Symfony\Component\HttpFoundation\BinaryFileResponse;
use Symfony\Component\HttpFoundation\ResponseHeaderBag;
use Symfony\Component\Filesystem\Filesystem;

$filename = // define your filename
$basePath = $this->getParameter('kernel.root_dir').'/../uploads';
$filePath = $basePath.'/'.$filename;

$fs = new Filesystem();
if (!$fs->exists($filePath)) {
    throw $this->createNotFoundException();
}
    
    $response = new BinaryFileResponse($filePath);
    $response->trustXSendfileTypeHeader();
    $response->setContentDisposition(
        ResponseHeaderBag::DISPOSITION_INLINE,
        $filename,
        iconv('UTF-8', 'ASCII//TRANSLIT', $filename)
    );
    
    return $response;

    Using X-Sendfile Header with BinaryFileResponse

BinaryFileResponse supports X-Sendfile (Nginx and Apache). To make use of it,
you need to determine whether the X-Sendfile-Type header should be
trusted and, if so, call trustXSendfileTypeHeader():

    BinaryFileResponse::trustXSendfileTypeHeader();

    or

    $response = new BinaryFileResponse($filePath);
    $response->trustXSendfileTypeHeader();

    Flash Messages

    Set Multiple Flash Messages

    use Symfony\Component\HttpFoundation\Session\Session;
    
    $session = new Session();
    $session->start();
    
    $session->getFlashBag()->add(
        'warning',
        'Your config file is writable, it should be set read-only'
    );
    $session->getFlashBag()->add('error', 'Failed to update name');
    $session->getFlashBag()->add('error', 'Invalid email');

    [1]

    Form

    Get Errors for All Form Fields

    Symfony 2.5+

    $allErrors = $form->getErrors(true);

    JSON

    Avoiding XSSI JSON Hijacking (only GET requests are vulnerable)

Pass an associative array as the outermost array to JsonResponse,
not an indexed array, so that the final result is an object:
{"object": "not inside an array"}
instead of an array:
[{"object": "inside an array"}]

    Create a JSON Response with JsonResponse Class

    use Symfony\Component\HttpFoundation\JsonResponse;
    
    $response = new JsonResponse();
    $response->setData(array(
        'name' => 'John'
    ));

    Create a JSON Response with Response Class

    use Symfony\Component\HttpFoundation\Response;
    
    $response = new Response();
    $response->setContent(json_encode(array(
        'name' => 'John',
    )));
    $response->headers->set('Content-Type', 'application/json');

    Set JSONP Callback Function

    $response->setCallback('handleResponse');

    Redirect

    Redirect to Another URL

    return $this->redirect('http://domain.com');

    or

    use Symfony\Component\HttpFoundation\RedirectResponse;
    
    $response = new RedirectResponse('http://domain.com');

    Request

    Get the Request Object

    Symfony 2

    namespace Acme\FooBundle\Controller;
    
    class DemoController
    {
       public function showAction()
       {
           $request = $this->getRequest();
           // ...
       }
    }

    Symfony 3

    namespace Acme\FooBundle\Controller;
    
    use Symfony\Component\HttpFoundation\Request;
    
    class DemoController
    {
       public function showAction(Request $request)
       {
           // ...
       }
    }

    [1]

    Get the Request Raw Data Sent with the Request Body

    $content = $request->getContent();

    Fetch a GET Parameter

    $request->query->get('site');

Fetch a GET Parameter in Array Format (data['name'])

    $request->query->get('data')['name'];

    Fetch a POST Parameter

    $request->request->get('name');

    Fetch a GET Parameter Specifying the Data Type

    $isActive = $request->query->getBoolean('active');
    $page     = $request->query->getInt('page'); 

    Other methods are:

• getAlpha('param');
• getAlnum('param');
• getDigits('param');
      [1]

    Response

Set an HTTP Status Code

    use Symfony\Component\HttpFoundation\Response;
    
    $response->setStatusCode(Response::HTTP_NOT_FOUND);

    Routing

    External URLs

google_search:
    path: /search
    host: www.google.com

Usage in a Twig template:

<a href="{{ url('google_search', {q: 'Jules Verne'}) }}">Jules Verne</a>

    External URLs – Using a Key to Reference a URL

    framework:
        assets:
            packages:
                symfony_site:
                    version: ~
                    base_urls: 'https://symfony.com/images'

Add images from the URL above into your views, using the "symfony_site" key in the second argument of asset():

    <img src="{{ asset('logos/header-logo.svg', 'symfony_site') }}">

    Generate Absolute URL

    Symfony 2

    $this->generateUrl('blog_show', array('slug' => 'my-blog-post'), true);

    Symfony 3

    $this->generateUrl('blog_show', array('slug' => 'my-blog-post'), UrlGeneratorInterface::ABSOLUTE_URL);

    Trailing Slash with an Optional Parameter

    my_route:
        pattern:  /blog/{var}
        defaults: { _controller: TestBundle:Blog:index, var: ''}
        requirements:
            var: ".*"

    Service

    Retrieve a Service

    $this->get('service.name');

    or

$this->container->get('service.name');

    Symfony 4+

    Using autowiring, just type-hint the desired service. E.g. getting the routing service:

    use Symfony\Component\Routing\RouterInterface;
    
    class SomeClass
    {
        private $router;
    
        public function __construct(RouterInterface $router)
        {
            $this->router = $router;
        }
    
        public function doSomething($id)
        {
            $url = $this->router->generate('route_name', ['id' => $id]);
    
            // ...
        }
    
    // ...
}

    YAML

    Parse YAML File

use Symfony\Component\Yaml\Yaml;
use Symfony\Component\Yaml\Exception\ParseException;
    
    try {
        $value = Yaml::parse(file_get_contents('/path/to/file.yml'));
    } catch (ParseException $e) {
        printf("Unable to parse the YAML string: %s", $e->getMessage());
    }

    [1]

    Environment Variables

    Custom Loader for Environment Variables

    Symfony 4.4

# config/services.yaml
services:
    _defaults:
        bind:
            string $name: '%env(name)%'

    Implement the EnvVarLoaderInterface in a service:

    namespace App\Env;
    
    use Symfony\Component\DependencyInjection\EnvVarLoaderInterface;
    
    class ConsulEnvVarLoader implements EnvVarLoaderInterface
    {
        public function loadEnvVars(): array
        {
            $response = file_get_contents('http://127.0.0.1:8500/v1/kv/website-config');
    
            $consulValue = json_decode($response, true)[0]['Value'];
            $decoded = json_decode(base64_decode($consulValue), true);
    
            // e.g.:
            // array:1 [
            //     "name" => "my super website"
            // ]
    
            return $decoded;
        }
    }

    Update the consul KV:

    ./consul  kv put website-config '{"name": "Symfony read this var from consul"}'

    [1]

    Twig

    Absolute URLs

    Symfony 2.6

    {{ asset('logo.png', absolute = true) }}

    Symfony 2.7+

    {{ absolute_url(asset('logo.png')) }}

    Assets Versioning

    Symfony 2.6

    {{ asset('logo.png', version = 'v5') }}

    Symfony 2.7+
    Version is automatically appended.

    {{ asset('logo.png') }}
    
    {# use the asset_version() function if you need to output it manually #}
    {{ asset_version('logo.png') }}

    [1]

    Get the Authenticated Username

    {{ app.user.username }}

    Localized Date String

In your Twig template, you can use pre-defined or custom date formats with the localizeddate filter:

{{ blog.created|localizeddate('none', 'none', 'pt_BR', null, "cccc, d MMMM Y 'às' hh:mm aaa") }}

The pattern "cccc, d MMMM Y 'às' hh:mm aaa" will show the date in this format:

domingo, 5 janeiro 2014 às 03:00 am

    Get the Base URL

    {{ app.request.getSchemeAndHttpHost() }}

    Inject All GET Parameters in a Route

    {{ path('home', app.request.query.all) }}

Prevent form_rest() and form_end() from Displaying a Specific Field

    Mark the field as rendered (setRendered)

    {% do form.somefield.setRendered %}

    Render a Template without a Specific Controller for a Static Page

    Use the special controller FrameworkBundle:Template:template in the route definition:

    # AppBundle/Resources/config/routing.yml
    static_page:
        path: /about
        defaults:
            _controller: FrameworkBundle:Template:template
            template: AppBundle:default:about.html.twig

    Override the 404 Error Template

    Create a new error404.html.twig template at:

    app/Resources/TwigBundle/views/Exception/

    [1]

    Render a Controller Asynchronously

    {{ render_hinclude(controller('AppBundle:Features:news', {
        'default': 'Loading...'
    })) }}

    [1]

    Render Just the Close Form HTML Tag

    {{ form_end(form, {'render_rest': false}) }}

    Using Localized Data (Date, Currency, Number, …)

    Enable the intl twig extension in config.yml or services.yml file:

    services:
        twig.extension.intl:
            class: Twig_Extensions_Extension_Intl
            tags:
                - { name: twig.extension }

    Visit original content creator repository
    https://github.com/andreia/awesome-symfony

  • piaf-ufrn

PIAF – Portal de Inscrições de Atividades Físicas (Physical Activity Registration Portal) of COESPE/UFRN

Welcome to the official repository of PIAF, UFRN's Physical Activity Registration Portal. This project was developed to simplify enrollment in the physical activities offered by COESPE at UFRN, providing a simple and efficient experience for both users and administrators (scholarship students).

📖 About the Project

PIAF is a platform that lets users:

• View the available physical activity modalities.
• Enroll quickly and conveniently.
• Keep track of their classes and results.
• Join waiting lists.
• Renew their enrollment.
• Avoid long queues.

In addition, administrators have access to tools for managing the activities, such as:

• Creating new classes.
• Managing enrollments.
• Viewing participation reports.
• Taking attendance.
• Removing inactive students.
• Sending notices by e-mail to candidates and students.
• Viewing system statistics.

PIAF's main goal is to reduce bureaucracy and streamline the enrollment process, benefiting both the participants and the scholarship students responsible for the activities. As a consequence, the system also benefits the environment, since people no longer need to travel to the institution in their own vehicles (previously without even a guarantee of getting a spot), and paper and printer-ink consumption is reduced.

Environmental impact

Picking up on the environmental point, here are some approximate figures for the impact avoided on enrollment days. Let's assume approximately 1000 people would have travelled to enroll in person (if everyone who registered had attended); incredibly, one thousand is rounded down from the real number.

To estimate the carbon emissions, we need to consider a few factors:

1. Cars: the average carbon dioxide (CO₂) emission per kilometre varies with the fuel type and the vehicle's efficiency. For petrol cars, the estimate is about 120 grams of CO₂ per kilometre. With 1000 people each covering approximately 40 km (round trip), the total emission would be approximately 4,800 kg of CO₂!

2. Buses: buses generally emit less CO₂ per passenger, but since the trip takes 2 buses (out and back) plus a campus shuttle each way, the total emission depends on the fuel type and efficiency. A diesel bus can emit about 1,300 grams of CO₂ per kilometre. For 4 buses covering 40 km, the total emission would be approximately 208 kg of CO₂.

Even if everyone took the bus, a significant amount of carbon dioxide would still end up in the atmosphere, so zero is a much better number! After all, nobody will need to travel (except for elderly people who cannot use the system, and even then the number is far smaller).
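
The figures above can be double-checked with a quick calculation; every input below is one of the README's own assumptions (1000 people, a 40 km round trip, 120 g/km per car, 1,300 g/km per bus, 4 buses):

```python
# Back-of-the-envelope CO2 estimate for in-person enrollment days.
people = 1000
round_trip_km = 40

# Cars: ~120 g CO2/km, one car per person.
car_total_kg = people * round_trip_km * 120 / 1000
print(car_total_kg)  # 4800.0 kg of CO2

# Buses: ~1300 g CO2/km, 4 buses in total.
bus_total_kg = 4 * round_trip_km * 1300 / 1000
print(bus_total_kg)  # 208.0 kg of CO2
```

Both results match the totals quoted above.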

🚀 Features

For Users

1. Sign-up and Login:

  • Account creation with basic information.
  • Secure login with authentication.
2. Activity Enrollment:

  • View all available modalities.
  • Enroll quickly with a single click.
3. My Classes:

  • Keep track of the classes the user is enrolled in.
  • Detailed information on schedules, openings, and status.
4. Results:

  • Access the results of completed activities.

For Administrators

1. Control Panel:

  • Access to a dashboard with general information about the activities.
2. Class Management:

  • Create, edit, and delete classes.
  • Control openings and participants.
3. Reports:

  • View detailed reports on enrollment and participation.

    πŸ› οΈ Tecnologias Utilizadas

    • Ruby on Rails: Framework principal para o desenvolvimento do backend.
    • PostgreSQL: Banco de dados utilizado para armazenar informaΓ§Γ΅es de usuΓ‘rios, turmas e inscriΓ§Γ΅es.
    • Bootstrap: Framework CSS para estilizaΓ§Γ£o e responsividade. Quase que metade do frontend Γ© baseado em Bootstrap.
    • JavaScript: Para funcionalidades dinΓ’micas do carrousel e do FAQ.

📂 Project Structure

Main Directories

• app/: contains the main application files, such as controllers, models, and views.
• app/views/pages/: contains the main pages, such as the home page and the FAQ.
• app/views/shared/: contains reusable components, such as the sidebar and the header.
• config/: application configuration, including routes.
• db/: database-related files, such as migrations and seeds.

    βš™οΈ ConfiguraΓ§Γ£o do Ambiente

    PrΓ©-requisitos para funcionar

    1. Ruby: 3.2 ou superior.
    2. Rails: 8.0.1.
    3. PostgreSQL: Instalado e configurado.

Initial Setup Steps

1. Clone the repository and enter the directory:

  git clone https://github.com/otsuki-dev/piaf-ufrn
  cd piaf-ufrn
2. Install the dependencies:

  bundle install
  yarn install
3. Set up the database:

  rails db:create db:migrate db:seed
4. Start the server:

  rails server
5. Open the application in your browser:

  http://localhost:3000
      

    πŸ–ΌοΈ Estrutura de Interface

    PΓ‘gina Inicial (home)

    • Apresenta informaΓ§Γ΅es sobre o portal e as modalidades disponΓ­veis.

    FAQ (Perguntas frequentes)

    • Responde Γ s perguntas mais frequentes sobre o funcionamento do portal.

    Painel do UsuΓ‘rio (pΓ‘gina com login)

    • Permite que o usuΓ‘rio visualize suas turmas e resultados.

    Painel do Administrador (pΓ‘gina dos bolsistas)

    • Oferece ferramentas para gerenciar turmas e visualizar relatΓ³rios.

📋 Contributing

Contributions are very welcome! Follow the steps below to contribute:

1. Fork the repository.
2. Create a branch for your feature:
  git checkout -b minha-feature
3. Make your changes and commit them:
  git commit -m "Exemplo de commit de um contibuidor B)"
4. Open a pull request.

    πŸ›‘οΈ LicenΓ§a

    Este projeto estΓ‘ licenciado sob a MIT License. VocαΊ½ pode contribuir e modificar ao seu critΓ©rio, porΓ©m nΓ£o se esqueΓ§a dos crΓ©titos de licenΓ§a, deu muito trabalho! πŸ˜‰

📞 Contact

For questions or suggestions, get in touch with @felipe-sbm:

    Visit original content creator repository
    https://github.com/otsuki-dev/piaf-ufrn

  • dbt_hubspot

    HubSpot dbt Package (Docs)

    What does this dbt package do?

• Produces modeled tables that leverage HubSpot data from Fivetran's connector in the format described by this ERD.
    • Enables you to better understand your HubSpot email and engagement performance. The package achieves this by performing the following:
      • Generates models for contacts, companies, and deals with enriched email and engagement metrics.
      • Provides analysis-ready event tables for email and engagement activities.
    • Generates a comprehensive data dictionary of your source and modeled HubSpot data through the dbt docs site.

    The following table provides a detailed list of all tables materialized within this package by default.

TIP: See more details about these tables in the package's dbt docs site.

    Table Description
    hubspot__companies Each record represents a company in Hubspot, enriched with metrics about engagement activities.
    hubspot__company_history Each record represents a change to a company in Hubspot, with valid_to and valid_from information.
    hubspot__contacts Each record represents a contact in Hubspot, enriched with metrics about email and engagement activities.
    hubspot__contact_history Each record represents a change to a contact in Hubspot, with valid_to and valid_from information.
    hubspot__contact_lists Each record represents a contact list in Hubspot, enriched with metrics about email activities.
    hubspot__deals Each record represents a deal in Hubspot, enriched with metrics about engagement activities.
    hubspot__deal_stages Each record represents when a deal stage changes in Hubspot, enriched with metrics about deal activities.
    hubspot__deal_history Each record represents a change to a deal in Hubspot, with valid_to and valid_from information.
    hubspot__tickets Each record represents a ticket in Hubspot, enriched with metrics about engagement activities and information on associated deals, contacts, companies, and owners.
hubspot__daily_ticket_history Each record represents a ticket's day in Hubspot with tracked properties pivoted out into columns.
hubspot__email_campaigns Each record represents an email campaign in Hubspot, enriched with metrics about email activities.
    hubspot__email_event_* Each record represents an email event in Hubspot, joined with relevant tables to make them analysis-ready.
    hubspot__email_sends Each record represents a sent email in Hubspot, enriched with metrics about opens, clicks, and other email activity.
    hubspot__engagement_* Each record represents an engagement event in Hubspot, joined with relevant tables to make them analysis-ready.

    Materialized Models

    Each Quickstart transformation job run materializes 147 models if all components of this data model are enabled. This count includes all staging, intermediate, and final models materialized as view, table, or incremental.

    How do I use the dbt package?

    Step 1: Prerequisites

    To use this dbt package, you must have the following:

    • At least one Fivetran HubSpot connection syncing data into your destination.
    • A BigQuery, Snowflake, Redshift, PostgreSQL, or Databricks destination.

    Databricks Dispatch Configuration

If you are using a Databricks destination with this package, you will need to add the following (or a variation of it) dispatch configuration within your dbt_project.yml. This is required for the package to correctly search for macros within the dbt-labs/spark_utils package and then dbt-labs/dbt_utils, in that order.

    dispatch:
      - macro_namespace: dbt_utils
        search_order: ['spark_utils', 'dbt_utils']

    Database Incremental Strategies

    Many of the models in this package are materialized incrementally, so we have configured our models to work with the different strategies available to each supported warehouse.

    For BigQuery and Databricks All Purpose Cluster runtime destinations, we have chosen insert_overwrite as the default strategy, which benefits from the partitioning capability.

    For Databricks SQL Warehouse destinations, models are materialized as tables without support for incremental runs.

    For Snowflake, Redshift, and Postgres databases, we have chosen delete+insert as the default strategy.

    Regardless of strategy, we recommend that users periodically run a --full-refresh to ensure a high level of data quality.
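
Such a refresh can be issued from the command line; the selector below is an assumption (it targets every model in this package):

```shell
# Rebuild all of this package's incremental models from scratch:
dbt run --full-refresh --select package:hubspot
```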

    Step 2: Install the package

    Include the following hubspot package version in your packages.yml file:

    TIP: Check dbt Hub for the latest installation instructions or read the dbt docs for more information on installing packages.

    packages:
      - package: fivetran/hubspot
        version: [">=1.0.0", "<1.1.0"] # we recommend using ranges to capture non-breaking changes automatically

    All required sources and staging models are now bundled into this transformation package. Do not include fivetran/hubspot_source in your packages.yml since this package has been deprecated.


    Step 3: Define database and schema variables

    By default, this package runs using your destination and the hubspot schema. If this is not where your hubspot data is (for example, if your hubspot schema is named hubspot_fivetran), add the following configuration to your root dbt_project.yml file:

    vars:
        hubspot_database: your_destination_name
        hubspot_schema: your_schema_name

    Step 4: Disable/enable models and sources

    When setting up your Hubspot connection in Fivetran, it is possible that not every table this package expects will be synced. This can occur because you either don’t use that functionality in Hubspot or have actively decided to not sync some tables. Therefore we have added enable/disable configs in the src.yml to allow you to disable certain sources not present. Downstream models are automatically disabled as well. In order to disable the relevant functionality in the package, you will need to add the relevant variables in your root dbt_project.yml. By default, all variables are assumed to be true, with the exception of:

    • hubspot_service_enabled
    • hubspot_ticket_deal_enabled
    • hubspot_contact_merge_audit_enabled
    • hubspot_merged_deal_enabled
    • hubspot_engagement_communication_enabled

    These default to false and must be explicitly enabled if needed. You only need to add variables for the sources that differ from their defaults.

    vars:
      # Marketing
    
      hubspot_marketing_enabled: false                        # Disables all marketing models
      hubspot_contact_enabled: false                          # Disables the contact models
      hubspot_contact_form_enabled: false                     # Disables form and contact form submission data and its relationship to contacts
      hubspot_contact_list_enabled: false                     # Disables contact list models
      hubspot_contact_list_member_enabled: false              # Disables contact list member models
      hubspot_contact_merge_audit_enabled: true               # Enables the use of the CONTACT_MERGE_AUDIT table (deprecated by Hubspot v3 API) for removing merged contacts in the final models.
                                                              # If false, contacts will still be merged using the CONTACT.property_hs_calculated_merged_vids field.
                                                              # Default = false
      hubspot_contact_property_enabled: false                 # Disables the contact property models
      hubspot_contact_property_history_enabled: false         # Disables the contact property history models
      hubspot_email_event_enabled: false                      # Disables all email_event models and functionality
      hubspot_email_event_bounce_enabled: false
      hubspot_email_event_click_enabled: false
      hubspot_email_event_deferred_enabled: false
      hubspot_email_event_delivered_enabled: false
      hubspot_email_event_dropped_enabled: false
      hubspot_email_event_forward_enabled: false
      hubspot_email_event_open_enabled: false
      hubspot_email_event_print_enabled: false
      hubspot_email_event_sent_enabled: false
      hubspot_email_event_spam_report_enabled: false
      hubspot_email_event_status_change_enabled: false
    
      # Sales
    
      hubspot_sales_enabled: false                            # Disables all sales models
      hubspot_company_enabled: false
      hubspot_company_property_history_enabled: false         # Disables the company property history models
      hubspot_deal_enabled: false
      hubspot_deal_company_enabled: false
      hubspot_deal_contact_enabled: false
      hubspot_deal_property_history_enabled: false            # Disables the deal property history models
      hubspot_engagement_enabled: false                       # Disables all engagement models and functionality
      hubspot_engagement_call_enabled: false
      hubspot_engagement_company_enabled: false
      hubspot_engagement_communication_enabled: true          # Enables the link between communications and engagements
      hubspot_engagement_contact_enabled: false
      hubspot_engagement_deal_enabled: false
      hubspot_engagement_email_enabled: false
      hubspot_engagement_meeting_enabled: false
      hubspot_engagement_note_enabled: false
      hubspot_engagement_task_enabled: false
      hubspot_merged_deal_enabled: true                       # Enables the merged_deal table to filter merged deals from final models. Default = false
      hubspot_owner_enabled: false
      hubspot_property_enabled: false                         # Disables property and property_option tables
      hubspot_role_enabled: false                             # Disables role metadata
      hubspot_team_enabled: false                             # Disables team metadata
      hubspot_team_user_enabled: false                        # Disables user-to-team relationships
    
      # Service
      hubspot_service_enabled: true                           # Enables all service models
      hubspot_ticket_deal_enabled: true

    (Optional) Step 5: Additional configurations

    Configure email metrics

    This package allows you to specify which email metrics (total count and total unique count) you would like to be calculated for specified fields within the hubspot__email_campaigns model. By default, the email_metrics variable below includes all the shown fields. If you would like to remove any field metrics from the final model, you may copy and paste the below snippet within your root dbt_project.yml and remove any fields you want to be ignored in the final model.

    vars:
      email_metrics: ['bounces',      #Remove if you do not want metrics in final model.
                      'clicks',       #Remove if you do not want metrics in final model.
                      'deferrals',    #Remove if you do not want metrics in final model.
                      'deliveries',   #Remove if you do not want metrics in final model.
                      'drops',        #Remove if you do not want metrics in final model.
                      'forwards',     #Remove if you do not want metrics in final model.
                      'opens',        #Remove if you do not want metrics in final model.
                      'prints',       #Remove if you do not want metrics in final model.
                      'spam_reports', #Remove if you do not want metrics in final model.
                      'unsubscribes'  #Remove if you do not want metrics in final model.
                      ]
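    To make the variable's effect concrete: each enabled field drives a pair of aggregate columns (a total count and a total unique count) in hubspot__email_campaigns. The naming pattern can be sketched as follows; this is an illustrative Python sketch, not the package's actual macro, so check the model documentation for exact column names.

    ```python
    def metric_columns(email_metrics):
        """Sketch: one total and one unique-total column per enabled metric.

        Illustrative only; the real columns are generated by the package's
        dbt macros.
        """
        cols = []
        for m in email_metrics:
            cols += [f"total_{m}", f"total_unique_{m}"]
        return cols

    # Removing a field from the email_metrics variable removes its pair:
    print(metric_columns(["bounces", "clicks"]))
    # ['total_bounces', 'total_unique_bounces', 'total_clicks', 'total_unique_clicks']
    ```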

    Include passthrough columns

    This package includes all source columns defined in the macros folder. We highly recommend including custom fields in this package as models now only bring in a few fields for the company, contact, deal, and ticket tables. You can add more columns using our pass-through column variables. These variables allow the pass-through fields to be aliased (alias) and cast (transform_sql) if desired, though neither is required. Datatype casting is configured via a SQL snippet within the transform_sql key. You may add the desired SQL while omitting the as field_name at the end, and your custom pass-through fields will be cast accordingly. Use the below format for declaring the respective pass-through variables in your root dbt_project.yml.

    vars:
      hubspot__deal_pass_through_columns:
        - name:           "property_field_new_id"
          alias:          "new_name_for_this_field_id"
          transform_sql:  "cast(new_name_for_this_field as int64)"
        - name:           "this_other_field"
          transform_sql:  "cast(this_other_field as string)"
      hubspot__contact_pass_through_columns:
        - name:           "wow_i_can_add_all_my_custom_fields"
          alias:          "best_field"
      hubspot__company_pass_through_columns:
        - name:           "this_is_radical"
          alias:          "radical_field"
          transform_sql:  "cast(radical_field as string)"
      hubspot__ticket_pass_through_columns:
        - name:           "property_mmm"
          alias:          "mmm"
        - name:           "property_bop"
          alias:          "bop"

    Alternatively, if you would like to simply pass through all columns in the above four tables, add the following configuration to your dbt_project.yml. Note that this will override any hubspot__[table_name]_pass_through_columns variables.

    vars:
      hubspot__pass_through_all_columns: true # default is false
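    To make the alias/transform_sql behaviour concrete, here is a hypothetical Python sketch of how one entry becomes a SQL select expression. The package actually does this inside its dbt macros; the function and field names here are purely illustrative.

    ```python
    def render_column(col):
        """Sketch: transform_sql (if present) becomes the expression,
        alias (if present) becomes the output column name."""
        expr = col.get("transform_sql", col["name"])
        alias = col.get("alias", col["name"])
        return f"{expr} as {alias}"

    print(render_column({
        "name": "property_field_new_id",
        "alias": "new_name_for_this_field_id",
        "transform_sql": "cast(new_name_for_this_field as int64)",
    }))
    # cast(new_name_for_this_field as int64) as new_name_for_this_field_id
    ```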

    Adding property label

    For property_hs_* columns, you can enable the corresponding, human-readable property_option.label to be included in the staging models.

    Important
    • You must have sources property and property_option enabled to enable labels. By default, these sources are enabled.
    • You CANNOT enable labels if using hubspot__pass_through_all_columns: true.
    • We recommend being selective with the label columns you add. As you add more label columns, your run time will increase due to the underlying logic requirements.

    To enable labels for a given property, set the property attribute add_property_label: true, using the below format.

    vars:
      hubspot__ticket_pass_through_columns:
        - name: "property_hs_fieldname"
          alias: "fieldname"
          add_property_label: true

    Alternatively, you can enable labels for all passthrough properties by using variable hubspot__enable_all_property_labels: true, formatted like the below example.

    vars:
      hubspot__enable_all_property_labels: true
      hubspot__ticket_pass_through_columns:
        - name: "property_hs_fieldname1"
        - name: "property_hs_fieldname2"

    Including calculated fields

    This package also provides the ability to pass calculated fields through to the company, contact, deal, and ticket staging models. If you would like to add a calculated field to any of the mentioned staging models, you may configure the respective hubspot__[table_name]_calculated_fields variables with the name of the field you would like to create, and the transform_sql which will be the actual calculation that will make up the calculated field.

    vars:
      hubspot__deal_calculated_fields:
        - name:          "deal_calculated_field"
          transform_sql: "existing_field * other_field"
      hubspot__company_calculated_fields:
        - name:          "company_calculated_field"
          transform_sql: "concat(name_field, '_company_name')"
      hubspot__contact_calculated_fields:
        - name:          "contact_calculated_field"
          transform_sql: "contact_revenue - contact_expense"
      hubspot__ticket_calculated_fields:
        - name:          "ticket_calculated_field"
          transform_sql: "total_field / other_total_field"

    Filtering email events

    When leveraging email events, HubSpot customers may take advantage of filtering out specified email events. These filtered email events are present within the stg_hubspot__email_events model and are identified by the is_filtered_event boolean field. By default, these events are included in the staging and downstream models generated from this package. However, if you wish to remove these filtered events you may do so by setting the hubspot_using_all_email_events variable to false. See below for exact configurations you may provide in your dbt_project.yml file:

    vars:
      hubspot_using_all_email_events: false # True by default
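    The is_filtered_event flag effectively acts like the predicate below. This is an illustrative Python sketch of the filtering logic, not the model's actual SQL.

    ```python
    def keep_event(event, using_all_email_events=True):
        """Sketch: filtered events are dropped only when the
        hubspot_using_all_email_events variable is set to False."""
        return using_all_email_events or not event["is_filtered_event"]

    events = [
        {"id": 1, "is_filtered_event": False},
        {"id": 2, "is_filtered_event": True},
    ]
    # With the variable set to False, only unfiltered events remain:
    print([e["id"] for e in events if keep_event(e, using_all_email_events=False)])
    # [1]
    ```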

    Daily ticket history

    The hubspot__daily_ticket_history model is disabled by default, but will materialize if hubspot_service_enabled is set to true. See additional configurations for this model below.

    Note: hubspot__daily_ticket_history and its parent intermediate models are incremental. After making any of the below configurations, you will need to run a full refresh.

    Tracking ticket properties

    By default, hubspot__daily_ticket_history will track each ticket’s state, pipeline, and pipeline stage and pivot these properties into columns. However, any property from the source TICKET_PROPERTY_HISTORY table can be tracked and pivoted out into columns. To add other properties to this end model, add the following configuration to your dbt_project.yml file:

    vars:
      hubspot__ticket_property_history_columns:
        - the
        - list
        - of 
        - property
        - names

    Extending ticket history past closing date

    This package will create a row in hubspot__daily_ticket_history for each day that a ticket is open, starting at its creation date. A Hubspot ticket can be altered after being closed, so its properties can change after this date.

    By default, the package will track a ticket up to its closing date (or the current date if still open). To capture post-closure changes, you may want to extend a ticket’s history past the close date. To do so, add the following configuration to your root dbt_project.yml file:

    vars:
      hubspot:
        ticket_history_extension_days: integer_number_of_days # default = 0
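    Conceptually, the variable extends each ticket's tracked window like the sketch below. This is illustrative Python with hypothetical argument names, not the package's SQL; the actual date spine is built inside the incremental intermediate models.

    ```python
    from datetime import date, timedelta

    def last_tracked_day(closed, today, extension_days=0):
        """Sketch: history runs up to the close date (or today if the ticket
        is still open), plus the configured extension, capped at today."""
        end = (closed or today) + timedelta(days=extension_days)
        return min(end, today)

    # A ticket closed on Jan 10 with a 7-day extension is tracked to Jan 17:
    print(last_tracked_day(date(2024, 1, 10), date(2024, 2, 1), extension_days=7))
    # 2024-01-17
    ```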

    Changing the Build Schema

    By default, this package will build the HubSpot staging models within a schema titled (<target_schema> + _stg_hubspot) and the HubSpot final models within a schema titled (<target_schema> + _hubspot) in your target database. If this is not where you would like your modeled HubSpot data to be written, add the following configuration to your root dbt_project.yml file:

    models:
        hubspot:
          +schema: my_new_schema_name # Leave +schema: blank to use the default target_schema.
          staging:
            +schema: my_new_schema_name # Leave +schema: blank to use the default target_schema.
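    For clarity, the default schema names resolve roughly as in this hypothetical sketch (the actual resolution happens inside dbt's generate_schema_name logic):

    ```python
    def default_schemas(target_schema):
        """Illustrative: where staging and final HubSpot models land
        by default, given your profile's target schema."""
        return {
            "staging": f"{target_schema}_stg_hubspot",
            "final": f"{target_schema}_hubspot",
        }

    print(default_schemas("analytics"))
    # {'staging': 'analytics_stg_hubspot', 'final': 'analytics_hubspot'}
    ```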

    Change the source table references

    If an individual source table has a different name than the package expects, add the table name as it appears in your destination to the respective variable:

    IMPORTANT: See this project’s dbt_project.yml variable declarations to see the expected names.

    vars:
        hubspot_<default_source_table_name>_identifier: your_table_name

    (Optional) Step 6: Orchestrate your models with Fivetran Transformations for dbt Coreβ„’

    Expand for details

    Fivetran offers the ability for you to orchestrate your dbt project through Fivetran Transformations for dbt Coreβ„’. Learn how to set up your project for orchestration through Fivetran in our Transformations for dbt Coreβ„’ setup guides.

    Does this package have dependencies?

    This dbt package is dependent on the following dbt packages. These dependencies are installed by default within this package. For more information on the following packages, refer to the dbt hub site.

    IMPORTANT: If you have any of these dependent packages in your own packages.yml file, we highly recommend that you remove them from your root packages.yml to avoid package version conflicts.

    packages:
        - package: fivetran/fivetran_utils
          version: [">=0.4.0", "<0.5.0"]
    
        - package: dbt-labs/dbt_utils
          version: [">=1.0.0", "<2.0.0"]
    
        - package: dbt-labs/spark_utils
          version: [">=0.3.0", "<0.4.0"]

    How is this package maintained and can I contribute?

    Package Maintenance

    The Fivetran team maintaining this package only maintains the latest version of the package. We highly recommend you stay consistent with the latest version of the package and refer to the CHANGELOG and release notes for more information on changes across versions.

    Contributions

    A small team of analytics engineers at Fivetran develops these dbt packages. However, the packages are made better by community contributions.

    We highly encourage and welcome contributions to this package. Check out this dbt Discourse article on the best workflow for contributing to a package.

    Are there any resources available?

    • If you have questions or want to reach out for help, see the GitHub Issue section to find the right avenue of support for you.
    • If you would like to provide feedback to the dbt package team at Fivetran or would like to request a new dbt package, fill out our Feedback Form.
    Visit original content creator repository https://github.com/fivetran/dbt_hubspot
  • ProtosAI

    ProtosAI

    A Study in Artificial Intelligence

    This project consists of a collection of scripts that explore capabilities provided by neural networks (NN), generative pre-trained transformers (GPT) and large language models (LLM). Most of these scripts are based on models hosted by Hugging Face (https://huggingface.co/).

    Google Colab Example: ProtosAI.ipynb


    Setup

    Setup required for these scripts:

    # Requirements
    pip install transformers datasets
    pip install torch

    Note that during the first run, the library will download the required model to process the inputs.

    Sentiment Analysis

    The sentiment.py script prompts the user for a line of text and uses a model to determine the sentiment of the text (positive, neutral or negative).

    Enter some text (or empty to end): I love you.
    Sentiment score: [{'label': 'positive', 'score': 0.9286843538284302}]
    
    Enter some text (or empty to end): I am sad.
    Sentiment score: [{'label': 'negative', 'score': 0.7978498935699463}]
    
    Enter some text (or empty to end): I hate dirty pots.
    Sentiment score: [{'label': 'negative', 'score': 0.9309694170951843}]
    
    Enter some text (or empty to end): Don't move!
    Sentiment score: [{'label': 'neutral', 'score': 0.6040788292884827}]
    

    Summarization

    The summary.py script takes a text file input and uses the summarization model to produce a single paragraph summary.

    $ python3 summary.py pottery.txt                                     
    Loading transformer...
    
    Reading pottery.txt...
    Number of lines: 14
    Number of words: 566
    Number of characters: 3416
    
    Summarizing...
    Text:  The key to becoming a great artist, writer, musician, etc., is to keep creating!
    Keep drawing, keep writing, keep playing! Quality emerges from the quantity of practice
    and continuous learning that makes them more perfect . The prize of perfection comes by
    delivering and learning, says Jason Cox .
    Number of lines: 1
    Number of words: 49
    Number of characters: 299
    
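    The line/word/character statistics shown above can be reproduced with a small stdlib helper. This is a sketch of the idea, not the script's actual code.

    ```python
    def text_stats(text):
        """Count lines, whitespace-separated words, and characters."""
        return {
            "lines": len(text.splitlines()),
            "words": len(text.split()),
            "chars": len(text),
        }

    print(text_stats("Keep drawing, keep writing, keep playing!"))
    # {'lines': 1, 'words': 6, 'chars': 41}
    ```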

    Transcribe

    The transcribe.py script takes an audio file (mp3 or wav) and uses a speech model to produce a basic text transcription. An additional tool, record.py, will use your laptop's microphone to record your dictation into audio.wav, which can then be used by transcribe.py.

    # Requirements - MacOS
    brew install ffmpeg   
    
    # Requirements - Ubuntu Linux
    sudo apt install ffmpeg   
    $ python3 transcribe.py test.wav
    
    Loading model...
    
    Transcribing test.wav...
    HELLO THIS IS A TEST
    

    Text to Speech

    The speech.py script converts a text string into an audio file. The script requires additional libraries:

    # Requirements MacOS
    brew install portaudio  
    
    # Requirements Ubuntu Linux
    sudo apt install portaudio19-dev
    sudo apt install python3-pyaudio
    
    pip install espnet torchaudio sentencepiece pyaudio
    $ python3 speech.py
    
    Loading models...
    
    Converting text to speech...
    
    Writing to audio.wav...
    
    Speaking: Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.
    

    Speech to Text

    The advanced OpenAI Whisper model can also be used for transcription. Sample scripts are located in the whisper folder.

    Voice Cloning

    There are several emerging models and kits that allow you to build your own speech model from sample speech. One is the TTS Python package by coqui-ai: https://github.com/coqui-ai/TTS

    # Install TTS
    pip install TTS

    Example (TBD)

    from TTS.api import TTS
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2", gpu=True)
    
    # generate speech by cloning a voice using default settings
    tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
                    file_path="output.wav",
                    speaker_wav="/path/to/target/speaker.wav",
                    language="en")

    Handwriting to Text

    The handwriting.py script converts an image of a handwritten single line of text to a string of text.

    # Requirements
    pip install image

    test.png

    $ python3 handwriting.py test.png
    Converting image to text: test.png
    
    Loading transformer...
     * microsoft/trocr-base-handwritten
    
    Analyzing handwriting from test.png...
    
    Resulting text:
    This is a test-Can you read this?
    

    Large Language Models (LLM)

    The exploration of different LLMs is located in the llm folder. The goal of this section is to explore the different LLM models, specifically related to building, training, tuning and using these models.

    • BiGram – This experiment uses an introductory training model based on the “Let’s build a GPT from scratch” video by Andrej Karpathy.
    • nanoGPT – Similar to above but using the tiny GPT, Andrej Karpathy’s nanoGPT
    • LLaMA – The llama.cpp project’s goal is to run LLaMA models using integer quantization to allow the use of these LLMs on local small scale computers like a MacBook.
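    To give a flavour of the bigram idea, a minimal word-level bigram model fits in a few lines of stdlib Python. The repo's experiment follows Karpathy's character-level neural model; this sketch only shows the underlying counting intuition.

    ```python
    from collections import Counter, defaultdict

    def train_bigram(text):
        """Count word-level bigrams: model[w1][w2] = times w2 followed w1."""
        words = text.split()
        model = defaultdict(Counter)
        for w1, w2 in zip(words, words[1:]):
            model[w1][w2] += 1
        return model

    def most_likely_next(model, word):
        """Return the highest-count successor of `word`, or None if unseen."""
        return model[word].most_common(1)[0][0] if model[word] else None

    m = train_bigram("the cat sat on the mat the cat ran")
    print(most_likely_next(m, "the"))
    # cat
    ```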

    OpenAI Test

    The openai.py script prompts the OpenAI gpt-3.5 model and prints the response.

    # Requirements
    pip install openai
    
    # Test
    $ python3 gpt.py
    What do you want to ask? Can you say something to inspire engineers?
    
    Answer: {
      "choices": [
        {
          "finish_reason": "stop",
          "index": 0,
          "message": {
            "content": "Of course! Here's a quote to inspire engineers:\n\n\"Engineering is not only about creating solutions, it's about creating a better world. Every time you solve a problem, you make the world a little bit better.\" - Unknown\n\nAs an engineer, you have the power to make a positive impact on society through your work. Whether you're designing new technologies, improving existing systems, or solving complex problems, your contributions are essential to advancing our world. So keep pushing the boundaries of what's possible, and never forget the impact that your work can have on the world around you.",
            "role": "assistant"
          }
        }
      ],
      "created": 1685856679,
      "id": "chatcmpl-7Nach0z2sJQ5FzZOVl6jZWPU4O6zV",
      "model": "gpt-3.5-turbo-0301",
      "object": "chat.completion",
      "usage": {
        "completion_tokens": 117,
        "prompt_tokens": 26,
        "total_tokens": 143
      }
    }

    GPT-2 Text Generation

    The gpt-2.py script uses the gpt2-xl model to generate text based on a prompt.

    $ python3 gpt-2.py   
    [{'generated_text': "Hello, I'm a language model, but what I do you need to know isn't that hard. But if you want to understand us, you"}, {'generated_text': "Hello, I'm a language model, this is my first commit and I'd like to get some feedback to see if I understand this commit.\n"}, {'generated_text': "Hello, I'm a language model, and I'll guide you on your journey!\n\nLet's get to it.\n\nBefore we start"}, {'generated_text': 'Hello, I\'m a language model, not a developer." If everything you\'re learning about code is through books, you\'ll never get to know about'}, {'generated_text': 'Hello, I\'m a language model, please tell me what you think!" – I started out on this track, and now I am doing a lot'}]
    Visit original content creator repository https://github.com/jasonacox/ProtosAI
  • ngshowon

    Angular Directive ngShowOn

    A simple angular directive to show/hide element according to the size of the window.

    Getting Started

    Installing

    You can install it via bower with :

    bower install ngshowon
    

    or just clone the project on github :

    https://github.com/PaulRosset/ngshowon.git
    

    Prerequisites

    Once downloaded, include the file:

    ...
      <script src="https://github.com/PaulRosset/path/to/the/file/ngShowOn.min.js"></script>
    ...
    

    and add it to your main Angular module as a dependency:

    var myApp = angular.module('myApp', ['ngShowOn']);
    

    Example of use

    Now, you can use it as you want on any HTML element, like this:

    <h1 ng-show-on="tablet">Hello World</h1>
    

    In this example, the element will be shown only on tablet devices.

    Another example, for mobile devices:

    <div ng-show-on="mobile" class="test">
        <p>Hello here</p>
    </div>
    

    For the moment, the available arguments are:

    • tablet
    • desktop
    • mobile

    By default, if no value is passed, the element is hidden on all devices.

    Tips

    This is the first time I have posted code on GitHub, so I would appreciate any remarks about the code.
    Thanks in advance.

    Authors

    • Paul Rosset

    Visit original content creator repository
    https://github.com/PaulRosset/ngshowon

  • deepl

    Rust API client for deepl

    The DeepL API provides programmatic access to DeepL’s machine translation technology.

    For more information, please visit https://www.deepl.com/contact-us

    Overview

    This API client was generated by the OpenAPI Generator project. By using the openapi-spec from a remote server, you can easily generate an API client.

    • API version: 2.7.0
    • Package version: 2.7.0
    • Build package: org.openapitools.codegen.languages.RustClientCodegen

    Installation

    Put the package under your project folder in a directory named deepl, then add the dependency to Cargo.toml under [dependencies], for example by running:

    cargo add deepl-openapi
    

    Documentation for API Endpoints

    All URIs are relative to https://api.deepl.com/v2

    | Class | Method | HTTP request | Description |
    |---|---|---|---|
    | ManageGlossariesApi | create_glossary | POST /glossaries | Create a Glossary |
    | ManageGlossariesApi | delete_glossary | DELETE /glossaries/{glossary_id} | Delete a Glossary |
    | ManageGlossariesApi | get_glossary | GET /glossaries/{glossary_id} | Retrieve Glossary Details |
    | ManageGlossariesApi | get_glossary_entries | GET /glossaries/{glossary_id}/entries | Retrieve Glossary Entries |
    | ManageGlossariesApi | list_glossaries | GET /glossaries | List all Glossaries |
    | ManageGlossariesApi | list_glossary_languages | GET /glossary-language-pairs | List Language Pairs Supported by Glossaries |
    | MetaInformationApi | get_languages | GET /languages | Retrieve Supported Languages |
    | MetaInformationApi | get_usage | GET /usage | Check Usage and Limits |
    | TranslateDocumentsApi | download_document | POST /document/{document_id}/result | Download Translated Document |
    | TranslateDocumentsApi | get_document_status | POST /document/{document_id} | Check Document Status |
    | TranslateDocumentsApi | translate_document | POST /document | Upload and Translate a Document |
    | TranslateTextApi | translate_text | POST /translate | Request Translation |

    Documentation For Models

    To get access to the crate’s generated documentation, use:

    cargo doc --open
    

    Visit original content creator repository
    https://github.com/StrayLittlePunk/deepl

  • Earth-Beauty

    Earth Beauty

    Earth Beauty is a term used for natural beauty that is in its original or inherent form. Natural beauty need not be achieved by doing or wearing something. On this website you can find ways to enrich your skin with natural, earthly ingredients.

    Project overview

    This website was created to change the perspective of young people: natural beauty is innate, and everyone is beautiful. Beauty is a relative word; it changes with country, race, caste, color and creed.

    Our initiative is to create awareness among people to admire themselves. This program is unique in itself: we change the meaning of beauty, which is not achieved just by make-up or plastic surgery.

    We provide different natural techniques so that our customers can develop a healthy lifestyle.

    Azure Technology Used:

    • Static Web App
    • Visual Studio Code

    Static Web App

    Visual Studio Code

    Our website includes:

    • home
    • gallery
    • about us
    • services
    • our blog
    • our branches
    • contact

    Don't think much, just join our program, build a healthy life and gain natural beauty.

    My Project Link

    https://victorious-sand-06fe06e10.1.azurestaticapps.net

    My GitHub Project Link

    https://shaikayesha1.github.io/Earth-Beauty/

    My Project Demo Link

    https://youtu.be/ASo7gjmXr_M

    Website Overview

    Home Page

    Website home page

    The above page gives you a detailed description of our program and the different earthly materials used.

    Services

    website home page

    Our program provides different organic or earthen materials in providing beauty.

    Contact US

    Website home page

    It's never too late to think of developing different ways to natural beauty.

    Visit original content creator repository https://github.com/Shaikayesha1/Earth-Beauty
  • markdown-it-directive-webcomponents

    markdown-it-directive-webcomponents

    δΈ­ζ–‡ζŒ‡ε—

    This plugin can convert a markdown directive (Generic directives/plugins syntax spec) to a web component (WebComponents). It needs markdown-it-directive and markdown-it as dependencies.

    Install

    npm i markdown-it-directive-webcomponents

    API

    const md = require('markdown-it')()
      .use(require('markdown-it-directive-webcomponents'), {
        components: [
          {
            present: 'both',
            name: 'directive-name',
            tag: 'tag-name',
            allowedAttrs: [ 'inline', 'src', 'title', /^prefix/ ],
            destLinkName: 'my-link-name',
            destStringName: 'my-string-name',
            parseInner: true
          },
        ]
      });
    • components: Write conversion rules in this array
      • present: Which type of directive to parse. Values: inline, block, both.
      • name: The name of the directive
      • tag: The tag name of the converted component
      • allowedAttrs: Allowed attribute names. If set as an array, elements in the array can be a String or a RegEx. If not set, allow any name. (has security issues, not recommended)
      • destLinkName: Attribute name used when converting link-type data in link destinations (i.e. the content in ()) to attributes. src by default
      • destStringName: Attribute name when converting string-type data in link destinations to attributes. title by default
      • parseInner: Whether to continue to parse the content as Markdown. Boolean. If it is false, the content will be unescaped and written to the output (HTML characters such as < and > will still be escaped).

    DOMPurify is recommended as a security backup.

    Here are three directive formats that can be recognized:

    text before :directive-name[content](/link "destination" /another "one"){.class #id name=value name="string!"} text after
    
    :: directive-name [inline content] (/link "destination" /another "one") {.class #id name=value name="string!"} content title ::
    
    ::: directive-name [inline content] (/link "destination" /another "one") {.class #id name=value name="string!"} content title ::
    content
    :::
    

    Will be converted to:

    <p>text before <tag-name class="class" id="id" name="value" src="/link" title="destination" inline="">content</tag-name> text after</p>
    
    <tag-name class="class" id="id" name="value" src="/link" title="destination">inline content</tag-name>
    
    <tag-name class="class" id="id" name="value" src="/link" title="destination">
    <p>content</p>
    </tag-name>

    In the conversion process, link-type values in () are added to the src attribute and string-type values to the title attribute. The values of class are merged together; for any other attribute, the first value is picked.

    For a block-level directive: in the third form, the inline content and content title are ignored and the content is parsed as a block; in the second form, the inline content is used if present, otherwise the content title, and it is parsed as inline content.

    Example

    const md = require('markdown-it')()
      .use(require('markdown-it-directive-webcomponents'), {
        components: [
          {
            present: 'both',
            name: 'directive-name',
            tag: 'tag-name',
            allowedAttrs: [ 'inline', 'src', 'title', /^prefix/ ],
            parseInner: true
          },
          {
            present: 'both',
            name: 'another-directive',
            tag: 'another-tag',
            allowedAttrs: [ 'inline', 'src', 'title', /^prefix/ ],
            parseInner: false
          },
        ]
      });
    
    console.dir(md.render(`
    text before :directive-name[content](/link "destination" /another "one"){.class #id name=value name="string!"} text after
    
    :: directive-name [inline content] (/link "destination" /another "one") {.class #id name=value name="string!"} content title ::
    
    ::: directive-name [inline content] (/link "destination" /another "one") {.class #id name=value name="string!"} content title ::
    content
    :::
    
    ::: another-directive
    content
    \\:::
    :::`));
    
    /* output
    
    <p>text before <tag-name class="class" id="id" name="value" src="/link" title="destination" inline="">content</tag-name> text after</p>
    <tag-name class="class" id="id" name="value" src="/link" title="destination">inline content</tag-name>
    <tag-name class="class" id="id" name="value" src="/link" title="destination">
    <p>content</p>
    </tag-name>
    <another-tag>
    content
    :::
    </another-tag>
    
    */

    More examples can be found in test.js.

    License

    MIT

    Copyright (c) 2020, lookas

    Visit original content creator repository https://github.com/hilookas/markdown-it-directive-webcomponents

  • blue-alliance-api-java-library

    The Blue Alliance API Java Library Build Status

    Java client library to retrieve data from The Blue Alliance using TBA API v3

    Full Javadoc documentation can be found here

    Usage

    Begin by creating a TBA object with your Read TBA API Key. This can be found or generated on your account dashboard.

    String authKey = // your TBA API read key
    TBA tba = new TBA(authKey);
    

    Regular Usage

    The library allows access to almost all of the calls in The Blue Alliance API v3 documentation.

    They are grouped into requests with team, event, district, or match parameters, and you will need to use the teamRequest, eventRequest, or matchRequest instance variables found in the TBA class.

    Here is an example of retrieving an array of teams in the FIRST Mid-Atlantic district in 2017:

    Team[] midAtlanticTeams = tba.districtRequest.getTeams("2017mar");
    

    A list of request methods for each request object can be found here.

    Advanced Usage

    If you want to utilize the If-Modified-Since and Last-Modified headers, you will need to make a direct URL request with the getDataTBA(String urlDirectory, String ifModifiedSince) method in the DataRequest class. This will return an APIResponse object with JSON data, the HTTP response code, and the Last-Modified header.

    The JSON data will need to be deserialized into an object model with a method in the Deserializer class before being used.

    Here is an example of fetching the Match objects for the 2017 Mount Olive District Event, if they have been updated.

    APIResponse resp = tba.dataRequest.getDataTBA("/event/2017njfla/matches");
    String lastModified = resp.getLastModified();
    Match[] matchList = Deserializer.toMatchArray(resp.getJson());
    
    // Execute the following code block after waiting or in a separate method
    
    resp = tba.dataRequest.getDataTBA("/event/2017njfla/matches", lastModified);
    
    if (resp.getResponseCode() != 304) { // HTTP code 304 indicates no change
    	matchList = Deserializer.toMatchArray(resp.getJson());
    	lastModified = resp.getLastModified();
    }
    

    Models

    A list of object model classes and their getter methods for instance variables can be found here. Please note that the master branch of this repository contains updated object models for the current season’s code, and object models for past seasons can be found in other branches.

    Dependencies

    You will need Gson to use the released compiled TBA API JAR file in your project. Gson can be installed with Maven, via a JAR file, or with Gradle by including the following in your build.gradle:

    dependencies {
    	compile 'com.google.code.gson:gson:2.2.4'
    }
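    If you use Maven instead, the equivalent dependency declaration for the same Gson version is:

    ```xml
    <dependency>
      <groupId>com.google.code.gson</groupId>
      <artifactId>gson</artifactId>
      <version>2.2.4</version>
    </dependency>
    ```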
    

    Note that you will need Gradle to compile this repository's source code if you do not obtain Gson separately.

    Contact

    Feel free to contact Spencer Ng at sng1488 (at) gmail (dot) com or create a pull request if you have any questions, fixes, or suggestions.

    Visit original content creator repository https://github.com/RaiderRobotix/blue-alliance-api-java-library