GitHub Repo https://github.com/ZooaDeV/ZooaDev

ZooaDeV/ZooaDev

### Hi, I'm Zooa, a 16-year-old computer science student. On this GitHub I share my small open source projects. Have fun browsing!

- :telescope: I'm currently working on [EarthBot](https://earthbot.xyz)
- :mailbox: How to reach me: [Discord](https://discord.gg/j9zzjnA) / [Mail](contact@dior.xyz)
- :computer: I use the [WebStorm](https://www.jetbrains.com/fr-fr/webstorm/) IDE by [JetBrains](https://www.jetbrains.com/)
- :french_bread: I live and study in Lyon (France)

## :earth_africa: Programming languages:

![TypeScript](https://img.shields.io/badge/typescript-%23007ACC.svg?style=for-the-badge&logo=typescript&logoColor=white) ![JavaScript](https://img.shields.io/badge/javascript-%23323330.svg?style=for-the-badge&logo=javascript&logoColor=%23F7DF1E) ![Java](https://img.shields.io/badge/java-%23ED8B00.svg?style=for-the-badge&logo=java&logoColor=white) ![HTML5](https://img.shields.io/badge/html5-%23E34F26.svg?style=for-the-badge&logo=html5&logoColor=white) ![CSS3](https://img.shields.io/badge/css3-%231572B6.svg?style=for-the-badge&logo=css3&logoColor=white) ![C++](https://img.shields.io/badge/c++-%2300599C.svg?style=for-the-badge&logo=c%2B%2B&logoColor=white) ![Python](https://img.shields.io/badge/Python-3776AB?style=for-the-badge&logo=python&logoColor=white) ![PHP](https://img.shields.io/badge/php-%23777BB4.svg?style=for-the-badge&logo=php&logoColor=white)

## :computer: Frameworks:

### CSS:

![TailwindCSS](https://img.shields.io/badge/Tailwind_CSS-38B2AC?style=for-the-badge&logo=tailwind-css&logoColor=white) ![Bootstrap](https://img.shields.io/badge/Bootstrap-563D7C?style=for-the-badge&logo=bootstrap&logoColor=white)

### JS:

![VueJS](https://img.shields.io/badge/Vue.js-35495E?style=for-the-badge&logo=vuedotjs&logoColor=4FC08D) ![ChartJS](https://img.shields.io/badge/Chart.js-FF6384?style=for-the-badge&logo=chartdotjs&logoColor=white) ![ThreeJS](https://img.shields.io/badge/ThreeJs-black?style=for-the-badge&logo=three.js&logoColor=white) ![NestJS](https://img.shields.io/badge/nestjs-%23E0234E.svg?style=for-the-badge&logo=nestjs&logoColor=white)

## :gear: Programming Tools:

[<img alt="github" width="50px" src="https://raw.githubusercontent.com/coderjojo/coderjojo/master/img/github.svg"/>](https://github.com) [<img alt="git" width="50px" src="https://iconape.com/wp-content/png_logo_vector/git-icon.png"/>](https://git-scm.com/) [<img alt="webstorm" width="60px" src="https://cdn.freebiesupply.com/logos/thumbs/2x/webstorm-icon-logo.png"/>](https://www.jetbrains.com/webstorm/) [<img alt="clion" width="50px" src="https://cdn.worldvectorlogo.com/logos/clion-1.svg"/>](https://www.jetbrains.com/clion/) [<img alt="nodeJS" width="50px" src="https://cdn.iconscout.com/icon/free/png-512/node-js-1-1174935.png"/>](https://nodejs.org/en/)

## :package: Databases:

![pgSQL](https://img.shields.io/badge/PostgreSQL-316192?style=for-the-badge&logo=postgresql&logoColor=white) ![MongoDB](https://img.shields.io/badge/MongoDB-4EA94B?style=for-the-badge&logo=mongodb&logoColor=white) ![SQLITE](https://img.shields.io/badge/SQLite-07405E?style=for-the-badge&logo=sqlite&logoColor=white) ![Redis](https://img.shields.io/badge/redis-%23DD0031.svg?&style=for-the-badge&logo=redis&logoColor=white) ![MySQL](https://img.shields.io/badge/mysql-%2300f.svg?style=for-the-badge&logo=mysql&logoColor=white) ![MariaDB](https://img.shields.io/badge/MariaDB-003545?style=for-the-badge&logo=mariadb&logoColor=white)

## :wrench: OS:

![Mac OS](https://img.shields.io/badge/mac%20os-000000?style=for-the-badge&logo=macos&logoColor=F0F0F0) ![Windows](https://img.shields.io/badge/Windows-0078D6?style=for-the-badge&logo=windows&logoColor=white) ![iOS](https://img.shields.io/badge/iOS-000000?style=for-the-badge&logo=ios&logoColor=white) ![Debian](https://img.shields.io/badge/Debian-A81D33?style=for-the-badge&logo=debian&logoColor=white)

## :triangular_flag_on_post: Projects:

| Name     | Link                       |
|----------|----------------------------|
| EarthBot | https://earthbot.xyz       |
| SoundBot | https://discord.gg/aVadNse |

<p align="center">
  Visitor count<br>
  <img src="https://profile-counter.glitch.me/Derpinou/count.svg" />
</p>

<img align="left" alt="My Github Stats" src="https://github-readme-stats.vercel.app/api?username=ZooaDeV&show_icons=true&hide_border=true" />
<img align="left" alt="My Top languages used" src="https://github-readme-stats.vercel.app/api" />
GitHub Repo https://github.com/4fox123/LATEST-TRENDS-in-UI-UX---4Fox-Solutions

4fox123/LATEST-TRENDS-in-UI-UX---4Fox-Solutions

COVID has certainly changed the dynamics of the IT industry. As a business owner, you need to keep up with the latest UI/UX trends, which shift with users' changing tastes. A user visits thousands of websites, and every business is trying to grab that visitor's attention; standing out from the competition is a hard nut to crack nowadays. The platform that grabs attention is the one that wins, and a good design adds value and makes customers come back. Design, however, is changing all the time: each year UI trends change, new features are introduced, and outdated approaches fall away. So you need to stay updated and keep your business up to date as well. In this article, we look at the latest UI/UX trends that can help your business grow.

## What is UI/UX?

UI/UX is becoming a popular and better-known term in the industry. UX, or User Experience, is essentially about understanding customers and their needs and designing an experience around them. It covers everything users see, touch, and perceive with all their senses when interacting with your product; in short, a user's experience with your site is the User Experience. A UX/UI designer is responsible for creating user-friendly interfaces and helps clients better understand how to use technological products.

UI, or User Interface, is one part of the user experience: it is the touchpoint the customer interacts with, what they see and touch on a touch device, or what they use online on a computer. Because that interface communicates the brand or service to the customer, it plays a huge role in how the service or product is presented.

## UI/UX Design Trends

### Minimalism

In design, less is always more: a golden rule of website design. Limiting the number of colors and experimenting with proportions and compositions works well, because the average user already sees countless discount ads and constant notifications, and designers should simplify the way information is presented.

### Simplified UX

Avoid inserting extra steps that users find unnecessary, and minimize the number of elements. Simplified registration and sign-in is the latest UX pattern: users don't want to remember yet another password, so letting them sign in with their phone number keeps things simple. If you can omit unnecessary steps, do it. If you can omit some fields in a form, do it.

### Blurred, colorful backgrounds

Colorful backgrounds help make a website more stylish, elegant, and easy on the eyes. Earlier, linear gradients with 2-3 colors were the trend; now designs go up to 10, and overlays are trending. However, you need to be careful when playing with gradients.

### Neomorphism

Neomorphism is gaining popularity because of its subtle yet original appearance, which merges skeuomorphism with flat design. The current focus is on building user interface elements in this neomorphic style.

### Unique and absurd 2D illustrations

2D illustrations contribute to several aspects of design production, including environment design, prop design, previs design, drawers, clean-up, rotations, and color design. Illustrations stay at the top of user interface trends and are getting fancier with time. They are usually delivered in SVG format: raster formats such as PNG, GIF, and JPEG lose quality as the screen resolution increases, while a vector format can be scaled up or down with no loss in quality.

### Voice User Interface (VUI)

Users can engage with a UI through voice commands. A voice-activated interface lets consumers interact without having to operate the interface by hand, and UX design teams are competing over the latest upgrades and advances in this area more than ever before. VUI is widely used in translation apps: it listens in your language and translates into the language you want. Alexa and Siri are the best-known examples of voice user interfaces. With VUI you can easily communicate with people who do not speak your language.

### Mobile-first approach

Never ignore mobile accessibility: almost 5.27 billion people use a mobile phone, and around 90% of them use social media platforms. If you ignore mobile visibility and focus only on the desktop appearance of your website, you lose a large part of your audience. Always keep the mobile-first approach in mind and design your website accordingly.

### Icons

Icons visually express objects, actions, and ideas, and visually appealing pictorial representations are a key way to communicate with the user. Conveying meaning in less space and in creative ways is powerful. Choose icons from the same family so they stay consistent in size.

It is also vital to understand that not all UX design trends are required for every product or website; applied blindly, they only add complexity. Keep up with the trends, but only use them if they match the needs of your users and are likely to work for your company.

### Synopsis

In 2021, design is less complicated and more diverse, delightful, and satisfying. Follow these trends to reach the millions of customers out there. We hope they help you understand where UI/UX stands today. The purpose of UI/UX design is to help users achieve their goals, so always ensure the design is relevant and valuable for consumers.
GitHub Repo https://github.com/himanshub1007/Alzhimers-Disease-Prediction-Using-Deep-learning

himanshub1007/Alzhimers-Disease-Prediction-Using-Deep-learning

# AD-Prediction

Convolutional Neural Networks for Alzheimer's Disease Prediction Using Brain MRI Images

## Abstract

Alzheimer's disease (AD) is characterized by severe memory loss and cognitive impairment. It is associated with significant brain structure changes, which can be measured by magnetic resonance imaging (MRI) scans. These observable preclinical structural changes provide an opportunity for early AD detection using image classification tools such as convolutional neural networks (CNNs). However, most AD-related studies to date have been limited by sample size, so finding an efficient way to train an image classifier on limited data is critical. In this project, we explored different CNN-based transfer-learning methods for AD prediction from structural brain MRI images. We find that both a pretrained 2D AlexNet with a 2D-representation method and a simple neural network with a pretrained 3D autoencoder improved prediction performance compared to a deep CNN trained from scratch. The pretrained 2D AlexNet performed even better (**86%**) than the 3D CNN with autoencoder (**77%**).

## Method

#### 1. Data

In this project, we used public brain MRI data from the **Alzheimer's Disease Neuroimaging Initiative (ADNI)** study. ADNI is an ongoing, multicenter cohort study, started in 2004, that focuses on understanding the diagnostic and predictive value of Alzheimer's disease specific biomarkers. The ADNI study has three phases: ADNI1, ADNI-GO, and ADNI2. Both ADNI1 and ADNI2 recruited new AD patients and normal controls as research participants. Our data included a total of 686 structural MRI scans from the ADNI1 and ADNI2 phases, with 310 AD cases and 376 normal controls. We randomly divided the total sample into a training dataset (n = 519), a validation dataset (n = 100), and a testing dataset (n = 67).

#### 2. Image preprocessing

Image preprocessing was conducted using Statistical Parametric Mapping (SPM) software, version 12. The original MRI scans were first skull-stripped and segmented using a segmentation algorithm based on 6-tissue probability mapping, then normalized to the International Consortium for Brain Mapping template of European brains using affine registration. Other configuration included bias, noise, and global intensity normalization. The standard preprocessing pipeline output 3D image files with a uniform size of 121x145x121. Skull-stripping and normalization ensured comparability between images by transforming the original brain image into a standard image space, so that the same brain substructures are aligned at the same image coordinates for different participants. Diluted or enhanced intensity was used to compensate for structural changes. In our project, we used both whole-brain images (including both grey matter and white matter) and grey matter only.

#### 3. AlexNet and Transfer Learning

Convolutional Neural Networks (CNNs) are very similar to ordinary neural networks. A CNN consists of an input and an output layer, as well as multiple hidden layers. The hidden layers are either convolutional, pooling, or fully connected. ConvNet architectures make the explicit assumption that the inputs are images, which allows us to encode certain properties into the architecture. These make the forward function more efficient to implement and vastly reduce the number of parameters in the network.

#### 3.1. AlexNet

The network contains eight layers with weights; the first five are convolutional and the remaining three are fully connected. The overall architecture is shown in Figure 1.
The output of the last fully-connected layer is fed to a 1000-way softmax which produces a distribution over the 1000 class labels. AlexNet maximizes the multinomial logistic regression objective, which is equivalent to maximizing the average across training cases of the log-probability of the correct label under the prediction distribution. The kernels of the second, fourth, and fifth convolutional layers are connected only to those kernel maps in the previous layer which reside on the same GPU (as shown in Figure 1). The kernels of the third convolutional layer are connected to all kernel maps in the second layer. The neurons in the fully-connected layers are connected to all neurons in the previous layer. Response-normalization layers follow the first and second convolutional layers. Max-pooling layers follow both response-normalization layers as well as the fifth convolutional layer. The ReLU non-linearity is applied to the output of every convolutional and fully-connected layer.

![](images/f1.png)

The first convolutional layer filters the 224x224x3 input image with 96 kernels of size 11x11x3 with a stride of 4 pixels (this is the distance between the receptive field centers of neighboring neurons in a kernel map). The second convolutional layer takes as input the (response-normalized and pooled) output of the first convolutional layer and filters it with 256 kernels of size 5x5x48. The third, fourth, and fifth convolutional layers are connected to one another without any intervening pooling or normalization layers. The third convolutional layer has 384 kernels of size 3x3x256 connected to the (normalized, pooled) outputs of the second convolutional layer. The fourth convolutional layer has 384 kernels of size 3x3x192, and the fifth convolutional layer has 256 kernels of size 3x3x192. The fully-connected layers have 4096 neurons each.

#### 3.2. Transfer Learning

Training an entire convolutional network from scratch (with random initialization) is often impractical [14] because it is relatively rare to have a dataset of sufficient size. An alternative is to pretrain a ConvNet on a very large dataset (e.g. ImageNet) and then use it either as an initialization or as a fixed feature extractor for the task of interest. Typically, there are three major transfer learning scenarios:

**ConvNet as fixed feature extractor:** We can take a ConvNet pretrained on ImageNet, remove the last fully-connected layer, and treat the remaining structure as a fixed feature extractor for the target dataset. In AlexNet, this yields a 4096-D vector; these features are usually called CNN codes. Once we have these features, we can train a linear classifier (e.g. a linear SVM or a softmax classifier) for our target dataset.

**Fine-tuning the ConvNet:** Another idea is not only to replace the last fully-connected layer of the classifier, but also to fine-tune the parameters of the pretrained network. Because of overfitting concerns, we may fine-tune only the higher-level part of the network. This is motivated by the observation that earlier layers of a ConvNet contain more generic features (e.g. edge detectors or color blob detectors) that are useful for many kinds of tasks, while later layers become progressively more specific to the details of the classes in the original dataset.

**Pretrained models:** The released pretrained model is usually the final ConvNet checkpoint, so it is common to see people use that network for fine-tuning.
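The two transfer-learning scenarios above translate into only a few lines of PyTorch. The sketch below is illustrative rather than this project's actual training code: it assumes 2D-represented MRI slices resized to 224x224 with 3 channels, and the layer index, hyperparameters, and dummy batch are assumptions for demonstration only.

```python
# Minimal sketch: AlexNet transfer learning for 2-class AD vs. NC classification.
# Hyperparameters and the dummy batch are illustrative, not the project's settings.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights="IMAGENET1K_V1")  # ImageNet-pretrained (torchvision >= 0.13)

# Scenario 1: ConvNet as fixed feature extractor -- freeze all pretrained parameters.
for param in model.parameters():
    param.requires_grad = False

# Replace the last fully-connected layer (4096 -> 1000) with a 2-way head (AD vs. NC).
model.classifier[6] = nn.Linear(4096, 2)

# Scenario 2 (fine-tuning): additionally unfreeze higher-level layers, e.g. the classifier.
for param in model.classifier.parameters():
    param.requires_grad = True

optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)   # stand-in for 2D-represented MRI slices
labels = torch.randint(0, 2, (8,))     # 0 = normal control, 1 = AD
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

In the fixed-feature-extractor scenario, only the new 2-way head is trained; fine-tuning simply widens the set of parameters passed to the optimizer.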
#### 4. 3D Autoencoder and Convolutional Neural Network

We take a two-stage approach: we first train a 3D sparse autoencoder to learn filters for convolution operations, and then build a convolutional neural network whose first layer uses the filters learned by the autoencoder.

![](images/f2.png)

#### 4.1. Sparse Autoencoder

An autoencoder is a 3-layer neural network used to extract features from an input such as an image. Sparse representations can provide a simple interpretation of the input data in terms of a small number of "parts" by extracting the structure hidden in the data. The autoencoder has an input layer, a hidden layer, and an output layer; the input and output layers have the same number of units, while the hidden layer contains more units, giving a sparse and overcomplete representation. The encoder function maps the input x to the representation h, and the decoder function maps the representation h back to the output. In our problem, we extract 3D patches from the scans as the input to the network, and the decoder aims to reconstruct the input from the hidden representation h.

#### 4.2. 3D Convolutional Neural Network

Training the 3D convolutional neural network (CNN) is the second stage. The CNN we use in this project has one convolutional layer, one pooling layer, two linear layers, and finally a log-softmax layer. After training the sparse autoencoder, we take the weights and biases of the encoder from the trained model and use them as the 3D filters of the 3D convolutional layer of this single-convolutional-layer network. Figure 2 shows the architecture of the network.

#### 5. Tools

In this project, we used NiBabel for MRI image processing and PyTorch for the neural network implementation.
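As a rough illustration of the two-stage pipeline in Section 4, the following PyTorch sketch (not the project's actual code) shows the wiring: a 3-layer sparse autoencoder over flattened 3D patches, and a small 3D CNN whose single convolutional layer is initialized from the encoder's weights. Patch size, number of filters, pooling size, and layer widths are illustrative assumptions.

```python
# Sketch of the two-stage approach in Section 4; all shapes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

PATCH = 5        # 3D patch size (5x5x5 voxels), illustrative
N_HIDDEN = 150   # overcomplete hidden layer (> 125 input units), illustrative

class SparseAutoencoder(nn.Module):
    """3-layer autoencoder trained on flattened 3D patches."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(PATCH ** 3, N_HIDDEN)
        self.decoder = nn.Linear(N_HIDDEN, PATCH ** 3)

    def forward(self, x):                   # x: (batch, PATCH**3)
        h = torch.sigmoid(self.encoder(x))  # hidden (sparse, overcomplete) representation
        return self.decoder(h), h           # reconstruction of the input + hidden code

class Simple3DCNN(nn.Module):
    """One 3D conv layer (filters from the autoencoder), pooling, two linear layers, log-softmax."""
    def __init__(self, autoencoder: SparseAutoencoder):
        super().__init__()
        self.conv = nn.Conv3d(1, N_HIDDEN, kernel_size=PATCH)
        # Reuse the trained encoder weights and biases as the 3D convolution filters.
        with torch.no_grad():
            self.conv.weight.copy_(
                autoencoder.encoder.weight.view(N_HIDDEN, 1, PATCH, PATCH, PATCH)
            )
            self.conv.bias.copy_(autoencoder.encoder.bias)
        self.pool = nn.MaxPool3d(4)
        # For a 121x145x121 input, conv gives 117x141x117 and pooling gives 29x35x29.
        self.fc1 = nn.Linear(N_HIDDEN * 29 * 35 * 29, 64)
        self.fc2 = nn.Linear(64, 2)  # AD vs. normal control

    def forward(self, x):                    # x: (batch, 1, 121, 145, 121)
        x = self.pool(F.relu(self.conv(x)))
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        return F.log_softmax(self.fc2(x), dim=1)
```

Preprocessed scans stored as NIfTI files can be loaded with NiBabel (e.g. `nibabel.load(path).get_fdata()`) before patch extraction and classification.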