In the post below I talk a little about the certification and the resources I used to study. If you are also trying to take the exam, it might be helpful.
I did some research into what the introductory Azure certification should be for a developer and found that Azure Fundamentals (AZ-900) was the common ground for any professional trying to start obtaining Azure Certifications. Once you have it, you can decide which path you want to take based on your role: administration, data science, development, devops, or networking.
Candidates for this exam are technology professionals who want to demonstrate foundational knowledge of cloud concepts in general and Azure in particular. This certification is a common starting point in a journey towards a career in Azure.
You can see more details about each topic in the AZ-900 study guide.
The Scott Duffy course includes a 50-question test to validate your knowledge.
Microsoft also has the Practice Assessment for Exam AZ-900: Microsoft Azure Fundamentals: a test with 50 questions simulating the questions that can be in the exam. The nice thing about it is that you can check your answer, and it will explain the correct answer and point to the official documentation where you can learn more. My advice is to take a few rounds of this practice assessment, because each time a small percentage of the questions differs from the previous round.
If you feel prepared to take the exam, you can schedule it with Pearson VUE.
Good luck!
For this first post, the chosen one is Azure Diagrams, by Shane Ochotny, Harsha Konduri, and Bret Myers.
Azure Diagrams is:
an architecture advisor tool that enables you to design diagrams in a collaborative manner and provides guidance on which Azure services can be integrated and when they should be utilized in your architecture.
It’s amazing how easy it is to create Azure-related diagrams with this tool: everything is there, and you can even see which services integrate with each other. Each item in the diagram already links to its documentation page and to the price estimate page.
Using it, you get a much better flow for creating diagrams than using a generic tool that knows nothing about Azure services.
For now it is completely free to use, so just create your account at https://azurediagrams.com and enjoy it.
An important and useful feature of .resx files is that you can have one file for each culture that your application supports. For example, you can have one ErrorsMessages.resx as the default to support en-US (English - United States) messages and another named ErrorsMessages.pt-BR.resx to support pt-BR (Portuguese - Brazil) messages.
In this tutorial, I will show you how to use a T4 template to read the .resx file and generate a helper class that makes it very easy to handle localized exception messages.
It is not the aim of this tutorial to go into detail on how to work with T4 templates; for that, read the documentation: Code Generation and T4 Text Templates.
Open your .NET project where you want to have your localized exception messages (probably a class library).
Add a Resources file called ErrorMessages.resx to the project:
This file will contain the English exception messages for your project. It also works as a fallback: if there is no other match for the current UICulture, this one will be used.
Add these lines to the file:
Repeat the same process, but name the file ErrorMessages.pt-BR.resx.
This file will contain your project’s Brazilian Portuguese exception messages.
Add these lines to the file:
| Name | Value |
|------|-------|
| PropertyIsRequired | A propriedade ‘{0}’ é obrigatória. |
Create a T4 template file named Errors.tt in the same folder where you placed the .resx files.
Now, copy the content below to your Errors.tt:
If everything looks good, you should see the following code in the Errors.generated.cs file (expand Errors.tt in Solution Explorer to see it):
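A rough sketch of what the generated class could look like (the real generated code reads its messages from ErrorMessages.resx via a ResourceManager and the current UICulture; the hard-coded English lookup below is a simplification for illustration):

```csharp
using System;

// Sketch of the class the T4 template generates from ErrorMessages.resx.
// In the real generated file, GetMessage resolves the text through a
// ResourceManager so the current UICulture picks the right .resx file.
public static class Errors
{
    static string GetMessage(string key)
    {
        // Simplified stand-in for the ResourceManager lookup.
        return key switch
        {
            "PropertyIsRequired" => "The property '{0}' is required.",
            "ProductHasMinWeight" => "The product '{0}' must weigh at least {1}KG.",
            _ => throw new ArgumentOutOfRangeException(nameof(key))
        };
    }

    public static Exception PropertyIsRequired(object arg0)
        => new InvalidOperationException(string.Format(GetMessage("PropertyIsRequired"), arg0));

    public static Exception ProductHasMinWeight(object arg0, object arg1)
        => new InvalidOperationException(string.Format(GetMessage("ProductHasMinWeight"), arg0, arg1));
}
```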
As you can see, two methods were generated within the Errors class: PropertyIsRequired and ProductHasMinWeight.
Every time you add, update, or remove lines in the .resx files and want your Errors class to be updated, just go to the menu “Build / Transform All T4 Templates”.
If you want to validate that you have followed the tutorial correctly so far, you can create an NUnit project and add this test fixture:
You’ve probably noticed that inside the .tt file there is a line: CHANGE THE VALUES BELOW TO THE VALUES CORRESPONDING TO YOUR PROJECT.
In the three variables below this line, you can customize the name of the resource file, the exception class that will be used (maybe you want to use a different exception, or your project has a custom one), and the namespace that the Errors class will be in.
Finally it’s time to use the generated code.
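As a hypothetical usage example (the Customer class below is not part of the tutorial’s code, and the inline stub stands in for the generated Errors class so the sketch is self-contained; it mirrors only the English message):

```csharp
using System;

// Minimal stand-in for the generated Errors class.
static class Errors
{
    public static Exception PropertyIsRequired(object name)
        => new InvalidOperationException($"The property '{name}' is required.");
}

// Hypothetical entity that uses the generated helper to reject empty values.
public class Customer
{
    string _firstName = "";

    public string FirstName
    {
        get => _firstName;
        set => _firstName = !string.IsNullOrEmpty(value)
            ? value
            : throw Errors.PropertyIsRequired("First Name");
    }
}
```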
In this case, an InvalidOperationException will be thrown with the message:
A propriedade 'First Name' é obrigatória.
The property 'First Name' is required.
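A second hypothetical usage, this time for ProductHasMinWeight (again, the Product class and the inline stub are illustrative, not the tutorial’s code):

```csharp
using System;

// Minimal stand-in for the generated Errors class.
static class Errors
{
    public static Exception ProductHasMinWeight(object name, object weight)
        => new InvalidOperationException(
            $"The product '{name}' must weigh at least {weight}KG.");
}

// Hypothetical entity that enforces a minimum weight of 1KG.
public class Product
{
    public string Name { get; }
    public float Weight { get; }

    public Product(string name, float weight)
    {
        if (weight < 1f)
            throw Errors.ProductHasMinWeight(name, 1);

        Name = name;
        Weight = weight;
    }
}
```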
In this case, an InvalidOperationException will be thrown with the message:
O produto 'Amazing Product' deve pesar pelo menos 1KG.
The product 'Amazing Product' must weigh at least 1KG.
Errors.tt checks for any {number} placeholder within the message and generates the appropriate method arguments.
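The idea can be sketched like this (the exact regex used inside Errors.tt is an assumption):

```csharp
using System.Linq;
using System.Text.RegularExpressions;

// Finds every {number} placeholder in a message and counts the distinct
// method arguments that the template would need to generate.
public static class PlaceholderScanner
{
    public static int CountArguments(string message)
        => Regex.Matches(message, @"\{(\d+)\}")
                .Cast<Match>()
                .Select(m => int.Parse(m.Groups[1].Value))
                .Distinct()
                .Count();
}
```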
If you want to use this technique in an ASP.NET project, remember to enable localization in the Startup.cs or Program.cs file:
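A minimal sketch of what that configuration could look like in a .NET 6+ Program.cs (the supported cultures below are just the two from this tutorial; adjust them to your project):

```csharp
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddLocalization();

var app = builder.Build();

// Make ASP.NET Core set the CurrentUICulture per request, so the
// generated Errors class picks the message from the right .resx file.
var supportedCultures = new[] { "en-US", "pt-BR" };
app.UseRequestLocalization(options => options
    .SetDefaultCulture("en-US")
    .AddSupportedCultures(supportedCultures)
    .AddSupportedUICultures(supportedCultures));

app.Run();
```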
In many projects, throwing exceptions is not possible or desired due to performance and design issues. In these cases, you can use the same technique shown in this tutorial but, instead of throwing exceptions, adapt the T4 template to generate a business error object that can travel through all layers of your architecture until reaching the UI or the serialization.
T4 is a powerful tool to help create an efficient development flow in the .NET stack. In cases like this tutorial, it drastically reduces the amount of code and improves the overall maintainability of the codebase.
Icons made by Freepik, Vignesh Oviyan, and Eucalyp from www.flaticon.com are licensed under Creative Commons BY 3.0.
For this sixth post, the chosen one is Simulation-based layout optimization for multi-station assembly lines.
The paper presents a novel approach for the automated 3D-layout planning of multi-station assembly lines. The planning method is based on a comprehensive model of the used production resources, including their geometry, kinematic properties, and general characteristics. Different resource types can be included in the planning system. A genetic algorithm generates and optimizes possible layouts for a line.
The optimization aims to minimize the line’s area and the costs for assembling the line while simultaneously optimizing the resources’ positioning to perform their tasks. The line’s cycle time is considered as a boundary condition. For the evaluation of different layout alternatives, a multi-body simulation is performed. A parameter study is used to set the algorithm’s parameters. Afterward, the algorithm is applied to three increasingly complex examples to validate and evaluate its functionality. The approach is promising for industrial applications as it allows the integration of various resource types and individualization of the optimization function.
Examples for the evaluation of positions in the workspace of an LBR iiwa 7 robot
The presented system is highly flexible and supports the positioning of multiple resource types. It is also open to integrating more resource variants or customizing the fitness function to reflect the individual user’s preferences accurately. The integration of more resource models and the modeling of more resource types would be the logical next step to improve the system further.
The final layout of the first three runs of the first example
Daria Leiber, David Eickholt, Anh-Tu Vuong, and Gunther Reinhart wrote the paper at the Department of Mechanical Engineering, Institute for Machine Tools and Industrial Management, Technical University of Munich, Germany.
You can access the full paper directly at Journal of Intelligent Manufacturing.
Trijam is a weekly game jam where your goal is to make something playable (and fun!) in just 3 hours.
The theme for this game jam was “Cursed Relics”.
This was the third Trijam that I participated in, with the purpose of testing my small game engine: Blazorame.
This quick-time event game is based on the famous Indiana Jones - Raiders of the Lost Ark scene where Indy needs to swap a bag for the Golden Idol, trying not to activate the temple traps (we all know how that turns out).
The game’s objective is to hit any key at the exact moment that Indy should swap the items.
You can try to collect as many Cursed Idols as you want, but if you miss the right moment you will lose them all and will be cursed.
The final result is: The Cursed Idol.
The game can be played directly in the browser here: https://giacomelli.itch.io/the-cursed-idol.
If you liked the game, you can cast a vote for the game jam here: https://itch.io/jam/trijam-243/rate/2305047.
Abstract: This thesis describes the process of developing an interactive tool for visualization of the process of solving selected optimization problems using genetic algorithms. The theoretical part provides an introduction to the evolutionary algorithms, more specifically then focuses on genetic algorithms. This part also lists typical optimization problems that are commonly solved using genetic algorithms. The practical part describes all the important steps in creating new functional software. This includes the comparison of existing solutions, analysis of possibly suitable technologies and libraries, and software documentation. The implementation of the application, using the Blazor WebAssembly framework, is also described in this part. The interactive tool provides the visualizations of three typical optimization problems and the problem of 2D bitmap image generation. At the end, the thesis lists the techniques used for testing and documentation of the application. The final result of this thesis is a functional interactive web application. (paper)
Abstract: The purpose of the research is to identify effective methods of using genetic algorithms to determine the viability of moving objects and their systems, formalize and generalize the problem of optimizing their configuration and behavior using a genetic algorithm; development of a software system that would allow setting the conditions of a problem of this type in a formal and at the same time visual form, solving this problem and visualizing intermediate and final results. Development methods are based on Unity engine and GeneticSharp library. (paper)
Abstract: Steel latticed towers are widely used as supports to conductors on transmission lines. On such projects, it is usual to use the same structural solution repeatedly along the line, what allows to achieve significant cost reduction due to structure’s design optimization. This work proposes the development of an optimization methodology based on the Genetic Algorithms concept to reduce self-supporting tower’s self-weight through the variation of its body base and top dimensions. To execute the algorithm, it was necessary to implement a system composed of modules for geometric parameterization, determination of loads, structural analysis, and sizing accordingly to ASCE 10-15. A 25-bar truss benchmark was optimized, and the results showed that the proposed methodology is effective. Besides, the performance of the methodology to optimize a self-supporting 525 kV cat-face tower was evaluated. The comparison of the results with the project of an equivalent tower which is in operation in Brazil proved the effectiveness of the proposed methodology for self-supporting towers optimization. (paper)
Abstract: This work explores artificial intelligence for the game Unstable Unicorns. This game started on Kickstarter and over the years, the game creators released several expansions. This work aims to implement a game simulator for this game, analyze the game, and design the artificial intelligence for this game. First, we will analyze the game rules, game mechanics, and artificial intelligence in similar games. We implemented the game simulator as close as possible to the original rules. Afterward, we developed three different artificial intelligence algorithms. These are rule-based agents, Monte Carlo agents and evolutionary agents. Finally, we ran the experiments and comparison tests with the implemented agents. The best-performing agent is the evolutionary agent. It is quick with the best win rate. (paper)
Abstract: The article discusses the possibility of creating genetic algorithms in the C# programming language, in particular using the GeneticSharp library. The application of a genetic algorithm to find the two most distant points on a rectangular area is shown. (paper)
GeneticDFA is not a scientific paper, but a very neat project available on GitHub that uses genetic programming to reverse engineer black-box systems modeled in DFA form.
Trijam is a weekly game jam where your goal is to make something playable (and fun!) in just 3 hours.
The theme for this game jam was “Hot and cold”.
This was the second Trijam that I participated in with the purpose of testing my small game engine: Blazorame.
This time I kind of liked the game jam theme, and I ended up creating a simple shooter where you need to use the right gun against each kind of enemy.
The final result is: Heat or Freeze.
You are the ship’s pilot and must survive waves of enemies as much as possible. You have two types of enemies:
If you hit an enemy with the wrong cannon, it will become stronger, bigger, and faster.
The game can be played directly in the browser here: https://giacomelli.itch.io/heat-or-freeze.
If you liked the game, you can cast a vote for the game jam here: https://itch.io/jam/trijam-240/rate/2305047.
Trijam is a weekly game jam where your goal is to make something playable (and fun!) in just 3 hours.
The theme for this game jam was “I need a medic!”.
This was the second Trijam that I participated in, and it had a different purpose for me: I wanted to test a small game engine that I’ve been building from scratch. The idea of this game engine is to develop browser-playable games that look like games from the Atari 2600 or the Game Boy; for instance, it has a limited palette of just 4 colors, like the classic Game Boy had. This game engine is entirely built in C# using Blazor, and the games built with it are Blazor WebAssembly apps.
The name of this small game engine is Blazorame.
As I didn’t much like this Trijam theme, I ended up just creating a kind of crossword game. Even so, it helped me understand the next steps needed to improve Blazorame.
The game can be played directly in the browser here: https://giacomelli.itch.io/i-need-a-medic-crossword.
If you liked the game, you can cast a vote for the game jam here: https://itch.io/jam/trijam-239/rate/2291731.
Below you can appreciate them:
Abstract: This thesis is describing the development of a framework for graphic user interface testing with usage of Soft Computing algorithms. Development is divided into four phases. The first phase is acquaintance with existing GUI testing frameworks and their analysis. The second phase is about choosing appropriate technologies for development, appropriate algorithms and implementation design. Then there is the framework implementation phase itself and last phase with the testing, result evaluation and improvements proposal. (paper)
Abstract: Increasing the efficiency and reducing the cost of developing microwave integrated circuits (ICs) determines the trend in the development of software modules for automated synthesis of circuit and topological solutions. The article presents the results of the development of a methodology and algorithm that allow for automated synthesis of circuit solutions for integrated microwave amplifiers with distributed amplification (MW URA) based on a set of requirements for their linear characteristics. A feature of the proposed technique is the use of models of active and passive elements for the selected manufacturing technology of microwave ICs. It allows, directly in the process of synthesis, obtaining circuit solutions of MW URA suitable for implementation in a given technological process. The work of the presented CAD technique for MW URA is demonstrated on the example of the development of a pre-amplification cascade for a buffer amplifier for the frequency range from 20 to 30 GHz, based on the 0.25 μm GaAs pHEMT technological process of a domestic foundry. (paper)
Abstract: One of the biggest problems of call center employees is fair customer distribution. Every employee wants to deal equally with customers with similar characteristics and to distribute premiums equally. Because of the large number of parameters and input data for this problem, the search space is large. Therefore, the problem is complex, in other words an NP-hard problem. The Genetic Algorithm, which is a heuristic search method, is preferred in order to provide an effective solution to such problems where the solution space is very large. Within the scope of the study, an application that can fairly assign customers to call center employees using the Genetic Algorithm method was developed with the help of the GeneticSharp library in the C# programming language, and the most appropriate solution was sought. It has been determined that the genetic algorithm can provide an effective solution to the assignment problems. (paper)
This tutorial will focus on six new features that can help us write more concise and readable code and how we can use these features on our C# for Unity.
I originally wrote this post as a guest writer for LogRocket’s blog.
The following prerequisites are required to follow along with this tutorial:
First, we need to create our Unity project. For this tutorial, we’ll use version 2021.3.4f1 which, at the moment I’m writing, is the newest Unity LTS version.
On the project templates list, choose 2D(core), give it a name, and click the Create project button.
With the project started, create a folder called Scripts inside the Assets folder. We’ll use it to keep our project organized during the tutorial.
For each sample of a new C# feature, we will first look at how it was done before, and then at how we can write less and more readable code with the new feature.
The classes below are just stubs that are used in all samples throughout the tutorial. You can add them to a script inside the Scripts folder:
In C# versions 8 and 9, a lot of new features were added to the language. You can read the full features list for each version in the links below:
Unity support for C# 8 started in version 2020.2, and support for C# 9 started in version 2021.2.
Most of these unsupported features are used in very specific scenarios, like extensible calling conventions for unmanaged function pointers, while some aren’t, like indices and ranges.
Because of this, features like indices and ranges and init-only setters will likely be supported in future versions of Unity. However, the chance of an unsupported feature for a very specific scenario gaining Unity support in the future is smaller than for a feature like indices and ranges.
Maybe you can find some workarounds to use these unsupported features in Unity, but I discourage you from doing this because Unity is a cross-platform game engine. A workaround for a new feature could lead you to problems that are quite hard to understand, debug, and resolve.
Fortunately, Unity supports some of the more common patterns and expressions from C# 8 and 9. Let’s review some of the most helpful ones below and see how they can enable us to write cleaner code.
The switch expression can dramatically simplify and reduce the lines of code (LOC) needed to write a switch, because we can avoid a bunch of boilerplate code, like the case and return statements.
Doc tip: the switch expression provides for switch-like semantics in an expression context. It provides a concise syntax when the switch arms produce a value.
Often, a switch statement produces a value in each of its case blocks. Switch expressions enable you to use more concise expression syntax. There are fewer repetitive case and break keywords and fewer curly braces.
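For instance, a hypothetical damage multiplier (the class and values below are illustrative, not the tutorial’s own stubs), written first as a classic switch statement and then as a switch expression:

```csharp
public enum DamageType { Physical, Fire, Ice }

public static class DamageCalculator
{
    // Before: a classic switch statement with case/return boilerplate.
    public static float MultiplierClassic(DamageType type)
    {
        switch (type)
        {
            case DamageType.Fire:
                return 1.5f;
            case DamageType.Ice:
                return 0.5f;
            default:
                return 1f;
        }
    }

    // After: the same logic as a C# 8 switch expression.
    public static float Multiplier(DamageType type) => type switch
    {
        DamageType.Fire => 1.5f,
        DamageType.Ice => 0.5f,
        _ => 1f
    };
}
```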
The property pattern enables you to match on properties of the object examined in a switch expression.
As you can see in the sample below, using a property pattern we can transform a series of if statements into a simple list of properties that the object in the switch statement should match.
The _ => arm has the same meaning as the default case in a classic switch.
Doc tip: a property pattern matches an expression when an expression result is non-null and every nested pattern matches the corresponding property or field of the expression result.
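A sketch of the idea with a hypothetical Enemy stub (not the tutorial’s own stub classes):

```csharp
public class Enemy
{
    public int Health { get; set; }
    public bool IsBoss { get; set; }
}

public static class EnemyEvaluator
{
    // Property patterns: each arm matches on the enemy's properties.
    public static string Describe(Enemy enemy) => enemy switch
    {
        { IsBoss: true } => "boss",
        { Health: 0 } => "dead",
        _ => "regular"
    };
}
```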
We can use type patterns to check if the runtime type of an expression is compatible with a given type.
The type pattern follows almost the same logic as the property pattern, but is applied to the object’s type. We can transform a series of if statements that check an object’s type into a list of types that the object in the switch statement should match.
Using the type pattern, we go from 16 lines of code to only 8 that have the same result and are quite clear to read and understand.
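A sketch with a hypothetical item hierarchy (illustrative names, not the tutorial’s stubs):

```csharp
public abstract class Item { }
public class Weapon : Item { }
public class Potion : Item { }
public class QuestItem : Item { }

public static class Inventory
{
    // Type patterns: each arm matches on the runtime type of the item.
    public static string Slot(Item item) => item switch
    {
        Weapon => "hands",
        Potion => "belt",
        _ => "backpack"
    };
}
```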
A constant pattern can be used to test if an expression result equals a specified constant.
Probably the simplest pattern match, it just matches a constant value (for instance, a string) and then returns the result.
A constant pattern can be used with any constant expression, like int, float, char, string, bool, and enum.
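A minimal sketch of a constant pattern over strings (a hypothetical command dispatcher, not the tutorial’s code):

```csharp
public static class Commands
{
    // Constant patterns: each arm matches a constant string value.
    public static string Run(string command) => command switch
    {
        "jump" => "Jumping!",
        "attack" => "Attacking!",
        _ => "Unknown command."
    };
}
```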
A relational pattern will compare an expression result with a constant.
This one may seem like the most complex pattern match, but at its core it’s not that complicated. With a relational pattern we can directly use relational operators such as <, >, <=, or >= to evaluate the object and then provide a result for the switch.
Doc tip: the right-hand part of a relational pattern must be a constant expression.
Any of the relational operators <, >, <=, or >= can be used in a relational pattern.
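A sketch of a relational pattern (a hypothetical health-bar color picker, not the tutorial’s code):

```csharp
public static class HealthBar
{
    // Relational patterns: arms are evaluated top to bottom, so the
    // first matching comparison wins.
    public static string Color(int health) => health switch
    {
        >= 70 => "green",
        >= 30 => "yellow",
        _ => "red"
    };
}
```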
We can use the not, and, and or pattern combinators to create logical expressions.
This is like an extension of the relational pattern where you can combine the logical operators not, and, and or to create a more complex and elaborate pattern match.
Doc tip: you use the not, and, and or pattern combinators to create the following logical patterns:
- Negation not pattern that matches an expression when the negated pattern doesn’t match the expression
- Conjunctive and pattern that matches an expression when both patterns match the expression
- Disjunctive or pattern that matches an expression when either pattern matches the expression
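The three combinators can be sketched in one place (a hypothetical score classifier, not the tutorial’s code):

```csharp
public static class Score
{
    public static string Classify(int score) => score switch
    {
        < 0 or > 100 => "invalid",   // disjunctive 'or'
        >= 60 and <= 100 => "pass",  // conjunctive 'and'
        not < 0 => "fail"            // negation 'not': the remaining 0..59
    };
}
```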
In this tutorial, we’ve learned how to use the switch expression, property pattern, type pattern, constant pattern, relational pattern, and logical pattern to write less and more modern C# code on Unity.
Hopefully, you can use some of these in your next project to spare yourself time while writing cleaner code.
Below you can appreciate them:
Abstract: The new generation of steelworks shaped by Industry 4.0 are digitized, networked, flexible and adaptable. Production processes use distributed information and communication structures, are more autonomous and capable to react to dynamic evolutions of the environment. Cyber-physical systems are a fundamental component of Industry 4.0 and enable new generation of smart processes. This paper presents a modular architecture approach for the design of cyber-physical steel production processes. The approach is tested within a production facility for long products such as rails or tubes taking into account the main peculiarities of the sector. The use of an industrial-agent-based solution for enabling intelligent capabilities and interactions among cyber-physical modules is investigated and adopted. Experimental results highlight the industrial applicability of the adopted implementation scheme combining agent-based technology with the proper connection between models, communication and optimisation methods. (paper)
Abstract: Part orientation and support structures are crucial to the quality of metal parts by laser powder bed fusion. Computer-aided solutions for part orientation can be used to support users during the process preparation. In this study, an original computer-aided approach to prepare parts for laser powder bed fusion was formulated and implemented. The proposed method consists of multi-objective optimisation of part orientation and a novel strategy for the simultaneous design of support structures. The automated part orientation optimisation considers both global and local objectives defined by the user. For this purpose, penalty functions measuring the building time, support volume, part distortion, surface roughness and supports contact points are adopted. Unlike in existing methods, the user has the opportunity to define the importance of these aspects in different regions of the part. Such functions are then optimised through a genetic algorithm. The proposed approach was applied to a real product imposing three different sets of objectives. The tested case studies were solved in less than 10 min, providing solutions that satisfied the imposed aims and constraints. Specifically, the results demonstrated that the orientation optimisation can reduce the building time by 68.1% or the material consumption by 66.8%, depending on user requirements. It was also shown how the proposed method can be used to minimise the surface and dimensional error of manufactured parts. The proposed approach allows to manually define the specific design requirements and translate them in terms of manufacturing decisions. This contributes to establishing a fruitful interaction between the user and the developed software during the process design. (paper)
Abstract: A complete approach for the search of top-quality six-bar linkages based on four-bar linkages generating coupler curves with circular regions has been developed. The approach includes a robust algorithm to search for high-accuracy circular regions in coupler curves and a fitness function able to evaluate the quality of the resulting linkage. The approach is used to explore the design space in order to search for optimal dwell linkages and to identify the types of linkages that provide the best dwells according to the established criteria. Then, the global optimum is searched by means of an evolutionary algorithm. The results demonstrate the efficiency and versatility of the approach, that can be customized for specific applications, and the existence of a high number of six-bar linkages with long output dwells. A highly probable global optimum is identified. (paper)
Abstract: This paper presents Genetic-WFC, a procedural level generation algorithm that mixes genetic optimization with Wave Function Collapse, a local adjacency constraints propagation algorithm. We use a synthetic player to evaluate the novelty, safety and complexity of the generated levels. Novelty is maximized when the synthetic player goes on tiles not visited for a long time, safety is related to how far it can see, and complexity evaluates the variability of the surrounding tiles. WFC extracts constraints from example levels, and allows us to perform the genetic search on levels with few local asset placement errors, while using as little level design rules as possible. We show that we are able to rely on WFC while optimizing the results, first by influencing WFC asset selection and then by re-encoding the chosen modules back to our genotype, in order to optimize crossover. We compare the fitness curves and best maps of our method with other approaches. We then visually explore the kind of levels we are able to generate by sampling different values of safety and complexity, giving a glimpse of the variability that our approach is able to reach. (paper)
Abstract: The demand for new deconstruction and demolition approaches is escalating as structures built in 20th century development booms approach their end of life. Rehabilitation and careful deconstruction approaches are increasingly economically and environmentally motivating. For example, in Ontario, Canada multi-decade efforts to decommission nuclear power plants are challenging teams of engineers, researchers, venders, and laborers. In these hazardous scenarios, classical heavy demolition approaches are not an option, and the asset owners find that the costly development of novel workflows and technologies to plan and undergo these deconstruction operations is the only option. These trends present construction researchers with an opportunity to develop technologies and processes to achieve deconstruction project goals with improved efficiency, certainty, and safety. This paper presents a modular framework for remote human-robot collaboration for waste management in decommissioning and demolition. The proposed framework includes robotic platform reality data capture, scan processing (e.g., segmentation, surface estimation, and recognition), gamified waste packing in virtual reality (VR), and packing plan execution. A comprehensive review of state-of-the-art technologies of each module is explored from the standpoint of applicability to deconstruction and demolition. Then, an autonomous robotic platform for reality data capture is presented. A reconfigurable semi-automated VR platform for waste packing optimization is presented as an example of this process workflow in the context of remote deconstruction and demolition. Finally, the ideas of robotic packing plan execution are discussed as future work. (paper)
With version 3.0.0, GeneticSharp now supports .NET 6.
The performance of many operations has been improved just by targeting .NET 6:
More about performance: https://github.com/giacomelli/GeneticSharp/wiki/Performance.
Previously, you needed to write many using statements for a basic genetic algorithm with GeneticSharp:
using GeneticSharp.Domain;
using GeneticSharp.Domain.Crossovers;
using GeneticSharp.Domain.Mutations;
using GeneticSharp.Domain.Populations;
using GeneticSharp.Domain.Selections;
using GeneticSharp.Domain.Terminations;
using GeneticSharp.Infrastructure.Framework.Threading;
The namespaces needed to use GeneticSharp have been simplified to just two:
using GeneticSharp;
and this one if you want to use the extensions:
using GeneticSharp.Extensions;
Only GeneticSharp:
install-package GeneticSharp
GeneticSharp and extensions (TSP, AutoConfig, Bitmap equality, Equality equation, Equation solver, Function builder, etc):
install-package GeneticSharp.Extensions
For Unity, you should use version 2.6.0 because Unity, at the moment I’m writing, only supports .NET Standard 2.1: https://docs.unity3d.com/Manual/dotnetProfileSupport.html.
You can use UnityNuGet to install GeneticSharp directly from NuGet.
Or you can use the latest GeneticSharp.unitypackage available in the Assets section of this release.
To install the previous version that supports .NET Standard 2.0 and .NET Framework 4.6.2:
install-package GeneticSharp -Version 2.6.0
To install the previous version that supports .NET Framework 3.5:
install-package GeneticSharp -Version 1.2.0
More about installing/setup: https://github.com/giacomelli/GeneticSharp/wiki/Setup.
For this fifth post, the chosen one is State of the Art in Procedural Music Generation using Genetic Algorithms and Rule Based GA Implementation with Attractor Waves.
The act of composing a piece of music is regarded as a creative process which encompasses objective and subjective rules. Objective rules can be seen as the minimum necessary ones so that a given composition is considered structurally valid (music theory). Subjective rules depend on the composer’s ability to make a creation that is pleasant to the majority of human listeners, or at least to an intended specific audience.
Given that the creation process uses the human composer’s experience and sensibility – which in some form is an unknown rule set – it is assumed by some authors that, by analyzing the characteristics of the produced pieces of music spanning from several composers, underlying rules can be inferred.
Genetic algorithms evolve data encoded in a structure analogous to a chromosome. Traditionally, this piece of information is represented in binary, but it is not restricted to two symbols, as long as the processes needed for the evolution can be applied.
For the fitness function, we will use music theory as done in many previous papers, penalizing notes that are outside the chosen scale and giving bonus to those that match.
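A toy sketch of that fitness idea (the scale, note encoding, and score values here are my own illustration, not the paper's exact parameters): notes whose pitch class falls inside the chosen scale earn a bonus, and notes outside it are penalized.

```csharp
using System;
using System.Linq;

// Pitch classes of the C major scale (C D E F G A B).
int[] cMajor = { 0, 2, 4, 5, 7, 9, 11 };

// Score a melody of MIDI note numbers: +1 for in-scale, -2 for out-of-scale.
int Fitness(int[] melody) =>
    melody.Sum(note => cMajor.Contains(note % 12) ? 1 : -2);

Console.WriteLine(Fitness(new[] { 60, 62, 64 })); // 3: C4, D4, E4 are all in C major
Console.WriteLine(Fitness(new[] { 60, 61 }));     // -1: C#4 is outside the scale
```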
To understand, in a first approach, which rating values were most fitting, a first tryout was made by translating the start of some classical pieces into the chromosome output structure. This ideal output was then rated by the fitness algorithm. By obtaining each individual rating component for each tested sample, it was possible to predefine a starting set of values to be used. The exception was the attractor wave, which was set subjectively. Using two attraction waves, it is possible to visualize the intended effect: in proximity to a wave, the notes fall into its attraction basin; when two waves are near, the effect either oscillates between the two or compromises between them. Overall, many of the generated compositions have a subjectively pleasant tone to them.
You can listen to a sample of the music generated by the genetic algorithm:
Rui Luz and Rafael Silva wrote the paper for Instituto Politécnico do Cávado e do Ave, Barcelos, Braga, Portugal.
You can access the full paper directly on the project's GitHub repository.
]]>Thereupon, I created this post to list those eBooks. If you know any other of them created by Unity’s team let me know in the comments and I’ll update the post.
Last updated on 03/11/2023
Currently listing 59 eBooks
Around 2013, Unity started to support 2D game development with built-in components, like a 2D physics engine, Collider2D, Rigidbody2D, Vector2, Sprite, Tilemap, etc.
In this post, we’ll cover the common properties and behaviors of 2D colliders, which messages are sent to their GameObjects, how we can use them in our scripts, and how each collider setup interacts with others.
I originally wrote this post as a guest writer for LogRocket's blog.
The following prerequisites are required to follow along with this tutorial:
First, we need to create our Unity project. For this tutorial, we'll use version 2021.3.4f1, which, at the moment I'm writing, is the newest Unity LTS version.
On the project templates list, choose 2D (core), give it a name, and click the Create project button. We'll call ours SampleScene.
With the project started, create two folders called Scripts and Physics Materials inside the Assets folder. We'll use them to keep our project organized during the tutorial.
Before we start using our Unity project, we need to take a little dive into the basic concepts of colliders.
Colliders are the way that Unity (and most of the available game engines, if not all) manages collisions between GameObjects. For the sake of this tutorial, we are only using the 2D colliders, but a lot of the rules mentioned below are applicable to 3D colliders, too.
In Unity, a 2D collider is a component that allows us to define a shape, where we want to receive notifications in our GameObject’s script whenever another GameObject (with another collider) collides with the first collider.
Unity doc tip: A collider is invisible, and does not need to be the exact same shape as the GameObject’s mesh. A rough approximation of the mesh is often more efficient and indistinguishable in gameplay.
Right now, Unity has eight built-in kinds of 2D colliders:
Every 2D collider in Unity inherits from a common class called Collider2D. As a result, they share common properties. Below is a list of the most notable and widely-used ones:
- a PhysicsMaterial2D that can be used by the collider to define things like friction and bounciness
- whether the collider is used by an Effector2D attached to the GameObject

Unity doc tip: Effector2D components are used to direct the forces when colliders come into contact with each other
In our Unity project, add a Sprite (Square) to the opened scene (SampleScene):
Select the Sprite (Square) GameObject and add a component called BoxCollider2D:
Now repeat the process, but add a Sprite (Circle) to the scene and add a CircleCollider2D component to it.
Move the Circle GameObject a little on top of the Square GameObject:
If we hit the Play button now, nothing will happen, even if we move the GameObjects inside the editor. No collision will happen.
Why does nothing happen? Well, we need to talk about the Rigidbody2D component.
A Rigidbody2D is a component used to tell Unity that it should put the GameObject under the control of the physics engine. In other words, GameObjects without a rigidbody do not exist for the physics engine.
If we just add a 2D collider to our GameObject, nothing will happen because the physics engine is not aware of it. If we want the physics engine to control our GameObject, we need to add a Rigidbody2D component to it.
This means that now our GameObject is affected by gravity via the Gravity Scale property, and can be controlled from scripts using forces.
We just need to add a Rigidbody2D to each of the GameObjects that we've already created and hit the Play button:
With the Rigidbody2D components added to our GameObjects, the physics engine is aware of them, and gravity starts to act on them.
In our sample, the two GameObjects are just falling, but what if we wanted the Square GameObject to stay in its position and the Circle GameObject to hit it, then bounce like a ball?
An easy way to achieve this is using a PhysicsMaterial2D.
Let’s add that ball effect to our sample scene. First, on the Square GameObject, change the Body Type property of its Rigidbody2D to Static:
Create a new PhysicsMaterial2D, name it Ball physics material, and place it inside our Physics Materials folder:
Change its Bounciness property to 1:
On the Circle GameObject's Rigidbody2D, change the Material property to use the Ball physics material that we've just created:
Hit the Play button again. Now we should see this happen:
When one collider interacts with another collider, Unity sends some messages (i.e., calls a method on any MonoBehaviour attached to the GameObject). In the case of a 2D collider, there are six available messages:
- OnCollisionEnter2D: called in the first frame when the collision starts
- OnCollisionStay2D: called in each frame while the collision is happening
- OnCollisionExit2D: called in the first frame when the collision ends
- OnTriggerEnter2D: called in the first frame when the collision starts
- OnTriggerStay2D: called in each frame while the collision is happening
- OnTriggerExit2D: called in the first frame when the collision ends

OnCollisionStay2D and OnTriggerStay2D are called each frame until they reach Time To Sleep (if the GameObject is not moving anymore).
We can change this setting on Project Settings / Physics 2D.
It’s time to write some code. Let’s create a script that logs to the Console window every time our Circle GameObject hits the Square GameObject (OnCollisionEnter2D), records how many frames they stay in contact (OnCollisionStay2D), and notes when they stop hitting each other (OnCollisionExit2D). We’ll also show what happens with and without a trigger.
Inside our Scripts folder, create a script called CollisionLogger and add it to the Circle GameObject:
Open the CollisionLogger script in the editor and type these methods:
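The original methods aren't reproduced here, but a minimal CollisionLogger along these lines produces the output described below:

```csharp
using UnityEngine;

// Logs the three 2D collision callbacks to the Console window.
public class CollisionLogger : MonoBehaviour
{
    void OnCollisionEnter2D(Collision2D collision)
    {
        Debug.Log($"{name} OnCollisionEnter2D with {collision.gameObject.name}");
    }

    void OnCollisionStay2D(Collision2D collision)
    {
        Debug.Log($"{name} OnCollisionStay2D with {collision.gameObject.name}");
    }

    void OnCollisionExit2D(Collision2D collision)
    {
        Debug.Log($"{name} OnCollisionExit2D with {collision.gameObject.name}");
    }
}
```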
Hit the Play button, and we should see something like this in the Console window:
As we can see, OnCollisionEnter2D is called when the Circle GameObject hits the Square GameObject. OnCollisionExit2D is called when they aren't hitting each other anymore, and OnCollisionStay2D has not been called because the two GameObjects do not stay in contact. To see OnCollisionStay2D being sent, just remove the Ball physics material from the Circle GameObject's Rigidbody2D:
Hit the Play button again and the output in our Console window should be:
Now we have one OnCollisionEnter2D and a lot of OnCollisionStay2D calls, which keep coming while the two GameObjects stay in contact or until Time To Sleep is reached.
Now, re-enable the Ball physics material on the Circle GameObject's Rigidbody2D and add the CollisionLogger to the Square GameObject too.
Hit Play, and the Console window should look like this:
As expected, the messages are called in all GameObjects involved in the collision.
What about the OnTrigger methods? Right now, none of our colliders is marked as a trigger (IsTrigger), which is why only the OnCollision methods have been called.
Triggers are useful when we need to detect that a given GameObject has reached a point or another GameObject. Let's say we want to be notified on the Square GameObject every time the Circle GameObject passes through it. We can add a collider with IsTrigger checked, and we will receive the notification when the physics engine calls the OnTriggerEnter2D method.
To see a trigger in action, mark the Square GameObject's collider as a trigger:
Play the scene, and we'll see that the Circle GameObject passes through the Square GameObject:
This happens because the Square GameObject is a trigger now. The physics engine won't generate the expected behavior when two objects collide, but will instead send the OnTrigger methods to the involved GameObjects.
If we look at the Console window, we'll notice that it is empty because no OnCollision methods are called. To log the OnTrigger methods, open our CollisionLogger script and add these new methods:
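A sketch of those trigger counterparts, added to the same CollisionLogger class:

```csharp
using UnityEngine;

public class CollisionLogger : MonoBehaviour
{
    // ... the OnCollision methods from before stay here ...

    void OnTriggerEnter2D(Collider2D other)
    {
        Debug.Log($"{name} OnTriggerEnter2D with {other.gameObject.name}");
    }

    void OnTriggerStay2D(Collider2D other)
    {
        Debug.Log($"{name} OnTriggerStay2D with {other.gameObject.name}");
    }

    void OnTriggerExit2D(Collider2D other)
    {
        Debug.Log($"{name} OnTriggerExit2D with {other.gameObject.name}");
    }
}
```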
Run the scene and we can see this log in the Console window:
Now only the OnTrigger methods are called because there is a trigger involved in the collision.
An important thing to note is that all OnCollision and OnTrigger methods receive a Collision2D/Collider2D parameter. This parameter can hold information about the collision itself, such as:
If we don’t need to use this information in our script, we can declare the OnCollision/OnTrigger methods without the parameter.
You probably noticed in the last section that there is parity between the OnCollision and OnTrigger methods the physics engine calls on the GameObjects involved in the collision. Despite this apparent similarity, knowing when each kind of interaction raises each kind of message/method in the involved GameObjects can be a little tricky; there are some rules for the interaction possibilities between different collider setups.
There are six different setups a collider can have, and each affects how it interacts with other colliders. These setups are a combination of the IsTrigger property of the Collider2D and the Body Type property of the Rigidbody2D attached to the same GameObject. Below is a list of the possible setups:
Which setups collide with each other, and which callbacks do they raise? The answer is in the table below, which I took from the Unity documentation site. It shows us where the collisions happen and when the collision callbacks (the OnCollision and OnTrigger methods) are called.
Source: Unity docs
Looking at the table, we can figure out things like:
To help us better understand the colliders' interactions, now and when we face the kind of problem mentioned above, I created a sample where we can move the GameObjects through each of the six possible interaction setups, see how they interact with each other, and see what callback messages are sent.
Try it and move each collider setup around to better understand what messages are sent for each interaction.
For the last part of this tutorial, I would like to mention the Physics2D settings.
These settings are not within the scope of this tutorial — talking about some of them could fill another whole tutorial — but I think it’s important to know that they exist, what their default values are, and that we can adjust them to the needs of our project.
You can access these settings via Project Settings > Physics 2D.
Settings like Gravity are pretty straightforward, but things like Velocity Iterations and Position Iterations can be a little obscure and can affect game behavior a lot.
Most of these settings are changed when we need to achieve some kind of non-conventional physics behavior or performance improvement, but you should be aware that you’ll need to retest gameplay after each change you make to these settings to ensure that you haven’t broken anything.
The bottom line is: make sure to only change these settings after studying and understanding their impact.
In this tutorial, we've explained the fundamentals of Unity 2D colliders: what they are, what their common properties and behaviors are, how to add a BoxCollider2D and CircleCollider2D to a GameObject, what a Rigidbody2D is, how to use a PhysicsMaterial2D, what the collision callbacks are and how to use them in our scripts, and, finally, the kinds of collider setups and how they interact.
Icons made by Freepik, Vignesh Oviyan and Eucalyp from www.flaticon.com are licensed by Creative Commons BY 3.0
The sprites used on the WebGL sample are from Kenney.
]]>You can download it for Windows, MacOS, Linux or play directly on browser from its itch.io page: https://giacomelli.itch.io/no-more-room-in-hell.
A 2D top-down survival shooter game where you are a rookie soldier in a secret government lab when a zombie apocalypse starts out of the blue, and now you need to find your way to the docks for a chance to be with your wife and son again.
I started this side project in early 2021, but stopped and restarted it now and then while doing other things. All the programming and Unity game development was done by me alone.
I took a lot of features I wanted in this type of game and implemented them, like inventory management, lots of weapons, missions, and a backstory that you can try to discover as you play the game.
There are 7 weapons available: gun, double gun, shotgun, grenade launcher, mine launcher, ice caster and mine. Well, you can also drive a car and kill a lot of zombies while driving, so maybe there are 8 weapons available.
Another cool thing I developed for the game is that a lot of the environment can be blown up and destroyed, I used my destructible sprite library for that.
I organized the game as a series, so the first 4 levels available are like the 4 episodes of season 1.
Almost all sprites used in the game are from Kenney.
The incredible soundtracks used in the game were created by Daniel HDR, a comic book artist who has worked on projects for DC Comics (Green Lantern Corps, Superman, Nightwing, Legion of Super-Heroes), Marvel Comics (Avengers, X-Men Forever, Falcon and Winter Soldier) and Dynamite Entertainment (The Shadow, Red Sonja, Kiss), and winner of the Dragon Award for 2020's "Best Graphic Novel" with Battlestar Galactica: Counterstrike.
Daniel HDR also has a music side project called D3FCON1. I discovered D3FCON1 during the game's development and thought that some of their music matched the mood of the game perfectly.
Daniel was very kind to give me permission to use the music in the game, as well as providing the artwork used as the cover of the game (as you can see at the top of this post).
The sound FX are from Peter Wayne and his amazing sound effects package on the Unity Asset Store, called Pro Sound Collection.
If you have tried to change the current playback position (currentTime) of large audio files using the HTML5 audio tag, even using the preload attribute, you probably struggled with the fact that the audio would never play at that position but would start playing again from the beginning.
This happens because most browsers, Chrome included, require the accept-ranges and content-range headers to be present in the audio file response.
If you are using ASP.NET, the easiest way to include those headers in the response is to use the third parameter of the Controller's File method:
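A minimal sketch, assuming ASP.NET Core (the controller name, route, file path, and content type are illustrative; the key part is the enableRangeProcessing argument):

```csharp
using Microsoft.AspNetCore.Mvc;

public class AudioController : Controller
{
    public IActionResult Track()
    {
        // enableRangeProcessing: true makes the response honor HTTP range
        // requests, so the Accept-Ranges/Content-Range headers are emitted
        // and the browser can seek inside the file.
        var stream = System.IO.File.OpenRead("audio/track.mp3");
        return File(stream, "audio/mpeg", enableRangeProcessing: true);
    }
}
```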
Then, the result response will have the needed headers that make browsers happy and allow you to change the audio’s playback position.
Trijam is a weekly game jam where your goal is to make something playable (and fun!) in just 3 hours.
The theme for this game jam was “There is a twist…!”.
When you have just 3 hours to make a game, even a simple one, you need to think about the idea and plan a lot before starting. Besides, the theme “There is a twist…!” was a challenging one, at least for me. It took me some time to come up with an idea to develop.
The final result is an Atari 2600 kind of game: Who Kills Who?
The game can be played directly on the browser here: https://giacomelli.itch.io/who-kills-who.
If you liked the game, you can cast a vote for the game jam here: https://itch.io/jam/trijam-154/rate/1376314.
]]>Don’t let the cute look or super fun animations fool you, Puzzimals is a cube-matching game challenging you to match identical tiles in 100 levels of thought-provoking puzzles in a world map.
Packed with features and creative puzzles you can combine cubes of ice, wild cards, bombs and so much more!
Each level unlocks and increases in difficulty and offers an almost infinite number of combos.
Fans of casual puzzle games will enjoy the crisp, colorful graphics featuring cute animals and the compelling music offers great relaxation.
If you have already worked on a project where the data access solution used SQL commands, instead of some ORM solution, you probably saw SQL commands inserted directly inside the C# code. I worked on projects with that approach too, but I saw a big problem with it: we were treating a complete language (SQL) as just a string inside our C# code. No IntelliSense, code completion, syntax validation, formatting, etc.
Some time ago, with that problem in mind, I created a small library to help us use SQL commands in the project we had at the time. As we were using Dapper as our facilitator to access the SQL Server database, we needed to write SQL commands on a daily basis.
That library, called SqlAsFile, helped us write SQL commands in separate .sql files and use them inside C# as typed string properties of a static class (using a T4 generator). This allowed us to write SQL with the full help of the IDE and with no chance of referencing an invalid SQL file path.
Run the custom tool on SqlInfoGenerator.tt every time you add or remove a .sql file from the project (right click, Run Custom Tool). You don't need to do that when you just change the content of an already existing .sql file.
Now you can access the content of your .sql files in a strongly typed way directly from your C# code:
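A hypothetical usage sketch with Dapper (the generated class name SqlInfo, the property SelectAllUsers, and the User model are illustrative; the actual names come from your own .sql files):

```csharp
using Dapper;
using System.Data.SqlClient;

// SqlInfo.SelectAllUsers would be the typed string property that the
// T4 generator created from a SelectAllUsers.sql file in the project;
// User is your own model class.
using (var connection = new SqlConnection("<your connection string>"))
{
    var users = connection.Query<User>(SqlInfo.SelectAllUsers);
}
```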
You can use some tags to tell SqlAsFile's parser how to treat a portion of the file:
Anything inside these tags will be stripped out of the SQL that you access from C#.
This tag is useful when you want to test the .sql directly against the DB without needing to define the arguments every time.
Use this tag if you want to read the CTE of your file through the Cte property in C#.
Unity 2020.2 was officially released on December 15, 2020 with a lot of fixes, API changes, changes, improvements and features. You can find the new features directly in the Unity manual with this search: newin20202. To read all items, check the complete release notes.
- 828 fixes
- 128 API changes
- 86 changes
- 261 improvements
- 66 features
Below are the features that most caught my attention and that I believe can positively impact my development workflow.
Arrays and Lists are now reorderable in the Inspector, and you can use the attribute [NonReorderable] to disable this function if you prefer.
Root Namespace is now available as a new field in the asmdef inspector. The Root Namespace will be used to add the namespace when creating scripts in Unity and in Visual Studio and Rider.
Supports all the newest C# 8 features and enhancements, except for default interface methods.
You can learn more about C# 8 features in these posts.
The compilation pipeline now supports Roslyn analyzers. This enables you to run C# code analyzers asynchronously in the background inside the Unity Editor without interrupting your iteration workflow. You can also run them synchronously from the command line.
Roslyn analyzers and ruleset files in Unity projects are powerful tools to help inspect your code for style, quality, and other issues. You can use existing analyzer libraries to inspect your code and write your own analyzers to promote the best practices or conventions within your organization.
Unity Quick Search is a handy package that enables you to search for anything in Unity. Quick Search 2.0 is now available and comes with more search tokens and the ability to provide contextual completion when typing queries. Scene searching is no longer limited to just the open Scene, but instead, it’s now possible to search through all the Scenes and Prefabs of your project at once.
Enables the Roslyn Reference Assemblies option by default when compiling C# scripts in the Editor, avoiding unnecessary recompiling of asmdef references. If you make changes that don’t involve code – for example, to materials, shaders or prefabs – the IL2CPP conversion from .NET assemblies to C++ will now be skipped entirely when building a player.
These were the features in Unity 2020.2 that caught my attention.
What were the features that caught your attention?
Project / Save As Template, Screenshots, etc.
Unity Editor internal menus
To enable those menus, go to Help / About Unity, then type internal. After that, you will see some new menus available in the Unity Editor:
To disable the menus, go to Help / About Unity, then type internal again.
Indices provide a succinct syntax for accessing single elements in an array or collection.
Consider the array below:
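The original code was lost in this export; a sketch of the kind of array being discussed (the contents are illustrative):

```csharp
using System;

// The ^ operator indexes from the end: ^1 is the last element.
var sequence = new[] { "alpha", "bravo", "charlie", "delta", "echo" };

Console.WriteLine(sequence[0]);   // alpha
Console.WriteLine(sequence[^1]);  // echo  (same as sequence[sequence.Length - 1])
Console.WriteLine(sequence[^2]);  // delta
```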
]]>Note that sequence[^0] throws an exception, just as sequence[sequence.Length] does. For any number n, the index ^n is the same as sequence.Length - n.
Some algorithms depend on multiple inputs. Tuple patterns allow you to switch based on multiple values expressed as a tuple.
Tuples provide a concise syntax to group multiple data elements in a lightweight data structure.
In this sample MonoBehaviour, we will get the message indicating the winner of the game rock, paper, scissors.
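The original sample was a MonoBehaviour; here is the same tuple-pattern idea as plain C# for brevity (the method and result strings are illustrative):

```csharp
using System;

// Switch over both inputs at once by grouping them into a tuple.
string RockPaperScissors(string first, string second) => (first, second) switch
{
    ("rock", "paper") => "Paper wins.",
    ("rock", "scissors") => "Rock wins.",
    ("paper", "rock") => "Paper wins.",
    ("paper", "scissors") => "Scissors wins.",
    ("scissors", "rock") => "Rock wins.",
    ("scissors", "paper") => "Scissors wins.",
    (_, _) => "Tie."
};

Console.WriteLine(RockPaperScissors("paper", "rock")); // Paper wins.
```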
]]>Property Patterns enable you to match on properties of the object examined in a switch expression.
The switch expression provides for switch-like semantics in an expression context. It provides a concise syntax when the switch arms produce a value.
In this sample, we will calculate the damage that an NPC can cause taking into account if it is an enemy, and if it is armed.
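The original code was lost in this export; a sketch of the idea, with an illustrative Npc class and damage values of my own:

```csharp
using System;

Console.WriteLine(CalculateDamage(new Npc { IsEnemy = true, IsArmed = true }));  // 10
Console.WriteLine(CalculateDamage(new Npc { IsEnemy = false, IsArmed = true })); // 0

// Property patterns match on the IsEnemy/IsArmed properties of the Npc.
static int CalculateDamage(Npc npc) => npc switch
{
    { IsEnemy: true, IsArmed: true } => 10,
    { IsEnemy: true, IsArmed: false } => 5,
    _ => 0 // allies cause no damage
};

class Npc
{
    public bool IsEnemy { get; set; }
    public bool IsArmed { get; set; }
}
```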
]]>Switch Expressions can dramatically simplify and reduce the LOC (Lines Of Code) of a switch.
The switch expression provides for switch-like semantics in an expression context. It provides a concise syntax when the switch arms produce a value.
Often, a switch statement produces a value in each of its case blocks. Switch expressions enable you to use more concise expression syntax. There are fewer repetitive case and break keywords, and fewer curly braces.
In the code below we have a classic switch statement:
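The original snippet was lost in this export; a classic switch along these lines (the weapon/sound scenario is my illustration, not the post's original code):

```csharp
using System;

string GetWeaponSound(string weapon)
{
    switch (weapon)
    {
        case "gun":
            return "bang";
        case "shotgun":
            return "boom";
        case "grenade launcher":
            return "thump";
        default:
            return "click";
    }
}

Console.WriteLine(GetWeaponSound("shotgun")); // boom
```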
And we can reduce 7 lines of code of this simple switch statement using a switch expression:
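The same hypothetical weapon/sound switch rewritten as a switch expression (my illustration, since the post's original snippet was lost):

```csharp
using System;

// Each arm produces a value; no case/break keywords or curly braces needed.
string GetWeaponSound(string weapon) => weapon switch
{
    "gun" => "bang",
    "shotgun" => "boom",
    "grenade launcher" => "thump",
    _ => "click"
};

Console.WriteLine(GetWeaponSound("gun")); // bang
```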
If you are using Visual Studio, there is a Quick Action (CTRL + .) to convert from a classic switch to a switch expression:
In this post, I will talk about how I put a CHIP-8 emulator to run inside the Unity Editor Inspector Window.
To help you understand everything in this post, I recommend reading the previous ARC-8 devlog posts, especially the last one.
The source code is not yet published on GitHub; I will announce it in this series of ARC-8 devlog posts and on my Twitter when that happens.
I wanted to run the graphics of the emulator inside the Unity Editor Inspector Window. Maybe you ask "why?". Well, there is an inside joke among Unity developers where we try to run unexpected things inside the Inspector window, as you can see in the links below:
This is a simplified version of the Editor script for Chip8Loader.
In the method StartEmulator we start the emulation inside the Inspector Window.
We start the emulator by calling the Run method of Chip8Loader, passing our local EditorChip8Input to override the emulator input to always use the keyboard.
We use the EditorCoroutineUtility.StartCoroutine method from the Editor Coroutines package to start two coroutines that update and render the emulator inside the editor.
The StopEmulator method stops the emulation inside the Inspector Window using EditorCoroutineUtility.StopCoroutine.
In the method UpdateEmulator we call LateUpdate from Chip8Loader to run an emulation cycle.
The RenderEmulator method calls Repaint from Editor to redraw the inspector using OnInspectorGUI.
When the emulation is not running, this method just draws the Chip8Loader inspector using DrawDefaultInspector.
When we click the Test emulator button on the inspector, the emulation is activated and rendered on the inspector.
When we use GUI.BeginClip, everything we draw using GL (the low-level graphics library) is rendered inside the area passed as an argument to BeginClip. This is why we call Graphic.Render between BeginClip and EndClip.
In the last step, we read the input using input.ReadKeys.
In the next ARC-8 devlog post I will probably talk about the release of the ARC-8 as an open-source project.
If you have any doubts about what I talk about above or any tip about the CHIP-8 emulator (or Unity) and you like to share it, please let me know in the comments section below.
Icons made by Freepik, Vignesh Oviyan and Eucalyp from www.flaticon.com are licensed by Creative Commons BY 3.0
For this second post, the chosen one is the Graphy by Martín Pane that I used in a lot of my Unity projects.
Graphy is:
a feature packed FPS Counter, stats monitor and debugger
My main use for it was to monitor the FPS of my mobile games, but Graphy has many other features:
With this tool you will be able to visualize and catch when your game has some unexpected hiccup or stutter, and act accordingly!
The debugger allows you to set one or more conditions, that if met will have the consequences you desire, such as taking a screenshot, pausing the editor, printing a message to the console and more! Even call a method from your own code if you want!
You can follow the instructions on the plugin GitHub repository: https://github.com/Tayx94/graphy to install it:
Just drag the [Graphy] prefab from the folder Assets/Tayx/Graphy - Ultimate Stats Monitor to your scene and Play the scene to see it in action.
In this post, I will talk about how I implemented the graphics, sound, input, and log systems for Unity 3D.
You can read the other ARC-8’s devlog posts.
The source code is not yet published on GitHub; I will announce it in this series of ARC-8 devlog posts and on my Twitter when that happens.
If you have been off the planet for the last decade, maybe you don't know what Unity is: Unity is a cross-platform game engine developed in C++, but the games made with it are developed using .NET and C#.
Unity is a cross-platform game engine developed by Unity Technologies, first announced and released in June 2005 at Apple Inc.’s Worldwide Developers Conference as a Mac OS X-exclusive game engine. As of 2018, the engine had been extended to support more than 25 platforms. The engine can be used to create three-dimensional, two-dimensional, virtual reality, and augmented reality games, as well as simulations and other experiences. The engine has been adopted by industries outside video gaming, such as film, automotive, architecture, engineering and construction.
In older versions, Unity only supported a subset of .NET Framework (4.x), but nowadays it’s supporting .NET Standard 2.0.
This is why we can use our ARC-8 Core, mentioned in the first devlog: because it is a .NET Standard class library, it can run directly on Unity.
The 3D models of the arcade cabinet and arcade room were created by my talented friend Giusepe Casagrande.
The system interfaces IGraphic, ISound, IInput, and ILog will be implemented as MonoBehaviours.
The MonoBehaviour class is the base class from which every Unity script derives, by default. It provides the framework which allows you to attach your script to a GameObject in the editor, as well as providing hooks into useful events such as Start and Update.
This is the IGraphic implementation.
In the method Initialize we verify whether we will render to the main camera or to a specific target camera, then we set some camera configuration, create the material we will use to render the CHIP-8 graphics, get the screen size, and initialize our Double Buffer array.
In most cases, we don’t use the main camera, but instead, use a target camera that uses a RenderTexture, and then we can use that texture on any surface on our game, like a TV screen or an arcade cabinet.
Render Textures are special types of Textures that are created and updated at run time. To use them, you first create a new Render Texture and designate one of your Cameras to render into it. Then you can use the Render Texture in a Material just like a regular Texture.
This is one of the two methods of the IGraphic interface that need to be implemented. We receive the array (64x32) of bytes representing the current state of the CHIP-8 graphics and just update our local array variable _gfx.
The method OnRenderObject is called after the camera has rendered the scene.
OnRenderObject can be used to render your own objects using Graphics.DrawMeshNow or other functions. This function is similar to OnPostRender, except OnRenderObject is called on any object that has a script with the function; no matter if it’s attached to a Camera or not.
The method Render will be called by the OnRenderObject method.
This method is used to draw the state of the CHIP-8 graphics (the _gfx array) to the current camera using GL (the low-level graphics library).
Use GL class to manipulate active transformation matrices, issue rendering commands similar to OpenGL’s immediate mode and do other low-level graphics tasks. GL immediate drawing functions use whatever is the “current material” set up right now (see Material.SetPass). The material controls how the rendering is done (blending, textures, etc.)
We use a second array called _buffer to implement a Double Buffer and reduce screen flickering.
A byte with value 1 should be drawn (foreground color) and a byte with value 0 should not be drawn (background color).
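A simplified sketch of what that Render method can look like (assuming _buffer is the 64x32 byte array and _material is the material created in Initialize; the real ARC-8 code may differ):

```csharp
using UnityEngine;

// Draws one quad per lit pixel of the 64x32 buffer using the GL
// immediate-mode API, in normalized (0..1) orthographic coordinates.
public void Render()
{
    GL.PushMatrix();
    _material.SetPass(0);
    GL.LoadOrtho();
    GL.Begin(GL.QUADS);
    GL.Color(Color.white);

    for (var x = 0; x < 64; x++)
    for (var y = 0; y < 32; y++)
    {
        if (_buffer[x + y * 64] == 0)
            continue; // value 0 is the background color: nothing to draw.

        float w = 1f / 64, h = 1f / 32;
        float px = x * w, py = 1f - (y + 1) * h; // CHIP-8 rows grow downwards.

        GL.Vertex3(px, py, 0);
        GL.Vertex3(px + w, py, 0);
        GL.Vertex3(px + w, py + h, 0);
        GL.Vertex3(px, py + h, 0);
    }

    GL.End();
    GL.PopMatrix();
}
```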
This is the second of the two methods of the IGraphic interface that need to be implemented, but as we implemented a Double Buffer, this method does not need to perform any operation.
It just sets the RenderTexture of the current target camera.
This is the ISound implementation.
We just try to locate our AudioSource component that will be used to play the sound.
This is the only method of the ISound interface we need to implement, and it just calls the AudioSource's PlayOneShot using the AudioClip defined in the _beep field.
This is the IInput implementation for the keyboard.
First, we create the dictionary _map to map the real keyboard keys to CHIP-8 keypad keys.
This is the only method we need to implement for the IInput interface.
In this method, we set to 1 the CHIP-8 keypad keys that were pressed by the player, using the Input.GetKey method.
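A sketch of what this input implementation can look like (the class name and the exact key layout are assumptions; I used the conventional 1234/QWER/ASDF/ZXCV mapping for the hex keypad, which the real ARC-8 code may not follow):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class KeyboardInput : MonoBehaviour
{
    // Real keyboard keys on the left, CHIP-8 keypad keys (0x0-0xF) on the right.
    static readonly Dictionary<KeyCode, int> _map = new Dictionary<KeyCode, int>
    {
        { KeyCode.Alpha1, 0x1 }, { KeyCode.Alpha2, 0x2 }, { KeyCode.Alpha3, 0x3 }, { KeyCode.Alpha4, 0xC },
        { KeyCode.Q, 0x4 }, { KeyCode.W, 0x5 }, { KeyCode.E, 0x6 }, { KeyCode.R, 0xD },
        { KeyCode.A, 0x7 }, { KeyCode.S, 0x8 }, { KeyCode.D, 0x9 }, { KeyCode.F, 0xE },
        { KeyCode.Z, 0xA }, { KeyCode.X, 0x0 }, { KeyCode.C, 0xB }, { KeyCode.V, 0xF }
    };

    // Sets keys[i] to 1 for each CHIP-8 keypad key currently held down.
    public void ReadKeys(byte[] keys)
    {
        foreach (var pair in _map)
            keys[pair.Value] = Input.GetKey(pair.Key) ? (byte)1 : (byte)0;
    }
}
```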
This is the ILog implementation.
The two methods implemented for the ILog interface use methods available in Unity's Debug class to send log messages to the Console window.
This is a simplified version of the component responsible for loading all systems (IGraphic, ISound, IInput, and ILog), initializing the Chip8 emulator class, and loading the ROM.
It verifies that all the systems needed to run the emulator were configured in the editor, then sets the desired FPS and starts running the emulator.
These two methods have some overloads, but in the end, they will create a new instance of the Arc8.Chip8 class using the systems defined and will load the ROM.
We use the LateUpdate method from MonoBehaviour to run the emulator's EmulateCycle method.
LateUpdate is called every frame after all Update functions have been called.
In the next ARC-8 devlog I will talk about how I put a CHIP-8 emulator to run inside the Unity editor inspector window.
If you have any questions about what I covered above, or any tips about the CHIP-8 emulator (or Unity) you would like to share, please let me know in the comments section below.
Icons made by Freepik, Vignesh Oviyan and Eucalyp from www.flaticon.com are licensed under Creative Commons BY 3.0
I created a gist, called Scene Selection Toolbar
, that uses the Unity Toolbar Extender and allows us to have a dropdown with the most used scenes right on the side of the Play
button.
Install the Unity Toolbar Extender as described in the Import section on GitHub.
Add the gist's .cs files to an Editor folder of your project, or use the Gist Importer.
Use the + button to add the current scene to the scenes dropdown.
Use the - button to remove the current scene from the list.
In this post, I will talk about how I implemented the graphics, sound, input, and log systems for Blazor.
You can read the other ARC-8’s devlog posts.
The source code is not yet published on GitHub; I will announce it in this ARC-8 devlog series and on my Twitter when that happens.
Blazor is a feature of ASP.NET that extends the .NET developer platform with tools and libraries for building web apps.
Blazor can run your client-side C# code directly in the browser, using WebAssembly. Because it’s real .NET running on WebAssembly, you can re-use code and libraries from server-side parts of your application.
This is why we can use our ARC-8 Core, mentioned in the previous devlog: because it is a .NET Standard class library, it can run directly on Blazor WebAssembly.
For some components, like menu, inputs, and buttons I use the Blazorise library.
You can test and play the CHIP-8’s games directly on your browser with our online demo: ARC-8 Blazor Online Demo.
The system interfaces IGraphic
, ISound
, IInput
, and ILog
will be implemented as Blazor components.
A component is a self-contained chunk of user interface (UI), such as a page, dialog, or form. A component includes HTML markup and the processing logic required to inject data or respond to UI events. Components are flexible and lightweight. They can be nested, reused, and shared among projects.
This is a simplified version of the IGraphic implementation (without the color selector that you can see in the demo).
In the method OnAfterRenderAsync
we verify if it’s the component’s first render, then we call a JS method that will initialize a JS helper for Chip8Graphic.razor
that will return the size of the canvas to C# code, then we use this information to scale our 64 x 32 CHIP-8’s display.
OnAfterRenderAsync and OnAfterRender are called after a component has finished rendering. Element and component references are populated at this point. Use this stage to perform additional initialization steps using the rendered content, such as activating third-party JavaScript libraries that operate on the rendered DOM elements.
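The sizing logic such a JS helper could hand back to the C# side can be sketched like this — the function name and the integer-scaling policy are my assumptions for illustration, not the actual ARC-8 source:

```javascript
// Hypothetical sketch of the canvas-measuring helper: given the canvas size,
// return the largest integer scale factor that still fits the 64 x 32
// CHIP-8 display inside it.
const CHIP8_WIDTH = 64;
const CHIP8_HEIGHT = 32;

function computeScale(canvasWidth, canvasHeight) {
  const scaleX = Math.floor(canvasWidth / CHIP8_WIDTH);
  const scaleY = Math.floor(canvasHeight / CHIP8_HEIGHT);
  // Never go below 1, and use the smaller of the two so both axes fit.
  return Math.max(1, Math.min(scaleX, scaleY));
}
```

With an integer scale, every CHIP-8 pixel maps to a crisp square block on the canvas instead of a blurry fractional rectangle.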
This is one of the two methods that must be implemented from the IGraphic interface. We receive the array (64x32) of bytes representing the current state of the CHIP-8 graphics and just update our local array variable _gfx.
The method RenderAsync
will be called by the Chip8Loader
component during the Game Loop.
This method is used to draw the state of CHIP-8 graphics (_gfx array) to the HTML page.
We use a second array called _buffer
to implement a Double Buffer and reduce the screen flickering
.
A byte with value 1 should be drawn (foreground color) and a byte with value 0 should not be drawn (background color).
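The double-buffer idea can be made concrete with a small sketch (illustrative only, not the actual ARC-8 source): compare the incoming frame against the previous one and repaint only the pixels that changed, instead of clearing and redrawing the whole canvas every frame.

```javascript
// Compare the new frame (gfx) with the previous frame (buffer) and report
// which pixel indices changed; the buffer is updated in place so it holds
// the latest frame for the next comparison.
function diffFrames(gfx, buffer) {
  const changed = [];
  for (let i = 0; i < gfx.length; i++) {
    if (gfx[i] !== buffer[i]) {
      changed.push(i);     // this pixel must be redrawn
      buffer[i] = gfx[i];  // keep the buffer in sync for the next frame
    }
  }
  // At each changed index, 1 means foreground color, 0 means background color.
  return changed;
}
```

Only the returned indices need a fill call on the canvas, which is what keeps the flickering down.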
This is the second of the two methods that must be implemented from the IGraphic interface, but as we implemented a Double Buffer, this method does not need to perform any operation.
Called by the JS side every time that the user resizes the browser window.
We use this one to invalidate our _buffer
and reset the canvas to the background color.
This is a simplified version of the ISound implementation.
We just use the information from NavigationManager
to set the audio file we want to play.
This is the only method we need to implement from the ISound interface; it just calls a JS method that gets the audio tag on the component, sets the AudioSource, then loads and plays it.
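The NavigationManager part boils down to building an absolute URL for the audio file from the app's base URI. A small sketch of that URL building (the function and file names are assumptions, not the actual ARC-8 code):

```javascript
// Join the Blazor app's base URI with the audio file name, making sure
// there is exactly one slash between them regardless of how each part
// was written.
function buildAudioUrl(baseUri, fileName) {
  return baseUri.replace(/\/+$/, '') + '/' + fileName.replace(/^\/+/, '');
}
```

The resulting URL is what the JS side assigns to the audio tag's src before calling load and play.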
This is the IInput
implementation.
First we create the dictionary _map
to map the real keyboard keys to CHIP-8 keypad keys.
The second dictionary we create is _keyDown
. It will be used to map what keys the player is pressing.
We call the JS method chip8Input.init
that will add two event listeners, one for keydown
and the other for keyup
that will call the C# methods HandleKeyDown
and HandleKeyUp
.
This is the only method we need to implement from the IInput interface.
In this method, we set to 1 the CHIP-8's keypad keys that were pressed by the player.
This method is responsible for updating the _keyDown dictionary according to the keyboard keys the player pressed.
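The interplay between the _map and _keyDown dictionaries can be sketched as follows. ARC-8's implementation is C#; this JavaScript sketch uses the key layout most CHIP-8 emulators adopt (1234/QWER/ASDF/ZXCV mapped onto the 4x4 keypad) — the post does not specify ARC-8's exact layout, so treat the mapping and names as assumptions:

```javascript
// Map physical keys to the 16-key CHIP-8 keypad (0x0-0xF).
const keyMap = {
  '1': 0x1, '2': 0x2, '3': 0x3, '4': 0xC,
  'q': 0x4, 'w': 0x5, 'e': 0x6, 'r': 0xD,
  'a': 0x7, 's': 0x8, 'd': 0x9, 'f': 0xE,
  'z': 0xA, 'x': 0x0, 'c': 0xB, 'v': 0xF
};

const keyDown = {}; // physical key -> currently pressed?

function handleKeyDown(key) { keyDown[key] = true; }
function handleKeyUp(key)   { keyDown[key] = false; }

// Build the 16-entry keypad state the emulator reads: 1 = pressed, 0 = not.
function getKeypadState() {
  const keypad = new Array(16).fill(0);
  for (const key in keyMap) {
    if (keyDown[key]) keypad[keyMap[key]] = 1;
  }
  return keypad;
}
```

In the Blazor version, handleKeyDown/handleKeyUp would be driven by the keydown/keyup listeners that chip8Input.init registers.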
This is a simplified version of the ILog
implementation.
The two methods implemented for ILog
interface use the Microsoft.Extensions.Logging.ILogger<T>
to send log messages to the browser console.
This is a simplified version of the component responsible for loading all systems (IGraphic, ISound, IInput, and ILog), initializing the Chip8 emulator class, loading the ROM, and running the game loop.
Initializes the emulator with the systems, then calls the JS chip8Loader.init
function that will use the browser window.requestAnimationFrame to call the C# method RunCycle
.
The window.requestAnimationFrame() method tells the browser that you wish to perform an animation and requests that the browser calls a specified function to update an animation before the next repaint.
This method is called by the JS (window.requestAnimationFrame
).
We implement a Game Loop and, at the end, call Chip8Graphic.RenderAsync.
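The loop driven by requestAnimationFrame can be sketched with the frame scheduler injected, so the logic is visible outside a browser. In ARC-8 the runCycle callback would be the JS interop call into the C# RunCycle method; all names here are assumptions for illustration:

```javascript
// A requestAnimationFrame-style game loop with the scheduler injected:
// each scheduled frame runs one emulation cycle, then schedules itself again.
function createGameLoop(runCycle, requestFrame) {
  let running = true;
  function frame() {
    if (!running) return;
    runCycle();          // in ARC-8 this would invoke the C# RunCycle method
    requestFrame(frame); // e.g. window.requestAnimationFrame in the browser
  }
  return {
    start: function () { requestFrame(frame); },
    stop:  function () { running = false; }
  };
}
```

In the browser you would pass window.requestAnimationFrame.bind(window) as the scheduler; here the injection is just what makes the loop testable.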
In the next ARC-8 devlog I will talk about the ARC-8’s implementation on Unity3D.
If you have any questions about what I covered above, or any tips about the CHIP-8 emulator (or Blazor) you would like to share, please let me know in the comments section below.
Icons made by Freepik, Vignesh Oviyan and Eucalyp from www.flaticon.com are licensed under Creative Commons BY 3.0
In this post, I will talk about my premises for the ARC-8’s code design that guided me during its development.
You can read the other ARC-8’s devlog posts.
The source code is not yet published on GitHub; I will announce it in this ARC-8 devlog series and on my Twitter when that happens.
I always wanted to develop a video game emulator. Anybody that has tried, or even googled about it, quickly realizes that it is not an easy or simple task. If you search a little further, you will see developers recommending CHIP-8 as the best thing to emulate for your first emulator project. Why? Well, the CHIP-8 is quite a simple virtual machine: it has only 35 opcodes, and simple graphics, sound, and input systems too.
According to Wikipedia:
CHIP-8 is an interpreted programming language, developed by Joseph Weisbecker. It was initially used on the COSMAC VIP and Telmac 1800 8-bit microcomputers in the mid-1970s. CHIP-8 programs are run on a CHIP-8 virtual machine. It was made to allow video games to be more easily programmed for these computers.
According to Mastering CHIP-8 by Matthew Mikolay (one of the best technical information source about CHIP-8):
CHIP-8 is an interpreted minimalist programming language that was designed by Joseph Weisbecker in the 1970s for use on the RCA COSMAC VIP computer. Due to its hexadecimal format, it was best suited to machines with a scarcity of memory, as minimal text processing had to be performed by the interpreter before a program could be executed. This property inevitably led to its implementation on a variety of hobbyist computers aside from the VIP, such as the COSMAC ELF, Telmac 1800, and ETI 660.
I won’t get into details about how to implement each of CHIP-8’s 35 opcodes, because there are plenty of tutorials and resources about this on the internet (look at the ‘Further reading’ section at the end of this post). My focus here is to explain the code design decisions that allowed me to build a CHIP-8 emulator core in .NET Standard, which lets us develop graphics, sound, and input systems for Blazor and Unity3D (and any other platform where C#/.NET is supported).
Why the name ARC-8? Well, the choice of the name was quite chaotic, like any name-idea brainstorm I have had with my friend Giusepe Casagrande. The name's meaning and pronunciation come from Arcade, because we want to remember those old days when we were kids playing classic games in an arcade, or fliperama as we call it in Brazil.
First of all, before starting to develop the code, I needed to sit down and define some premises for the ARC-8's code design:
To make the ARC-8 core solution cross-platform, the first decision was to implement it as a .NET Standard class library. The second was to define the graphic, sound, input, and logging systems of the emulator as interfaces that are only implemented on specific platforms, like Blazor and Unity3D.
The main part of developing a CHIP-8 emulator is the opcodes.
The common solution in many emulators is to put all opcodes inside a giant switch statement (some use nested switches to group opcodes).
There is no problem with this kind of solution, but besides the code readability, which can easily suffer from too many cases in the switch statement, there is a problem for unit testing: we cannot test each opcode in isolation.
Sure, you can still unit test an emulator with opcodes in a switch statement, but I decided to define an interface that an opcode needs to implement to be used in the emulator.
With the decision to implement each opcode in a separate class and to use interfaces for each system, we can easily unit test them and achieve 100% code coverage:
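The ARC-8 core is C#, but the "one handler per opcode behind a common interface" idea is language-agnostic, so here is a sketch of it in JavaScript. The handler shape (matches/execute) and all names are illustrative, not ARC-8's actual API; the two opcodes shown follow the standard CHIP-8 semantics (6XNN: set VX = NN; 7XNN: add NN to VX without carry, 8-bit wrap):

```javascript
// Each opcode is its own object with a matches/execute pair, so every
// handler can be unit tested in isolation instead of living in one
// giant switch statement.
const opcodes = [
  {
    name: '6XNN', // LD Vx, NN
    matches: op => (op & 0xF000) === 0x6000,
    execute: (op, state) => { state.v[(op & 0x0F00) >> 8] = op & 0x00FF; }
  },
  {
    name: '7XNN', // ADD Vx, NN (no carry flag, wraps to 8 bits)
    matches: op => (op & 0xF000) === 0x7000,
    execute: (op, state) => {
      const x = (op & 0x0F00) >> 8;
      state.v[x] = (state.v[x] + (op & 0x00FF)) & 0xFF;
    }
  }
];

// The dispatcher just finds the first handler that matches the opcode.
function emulateOpcode(op, state) {
  const handler = opcodes.find(h => h.matches(op));
  if (!handler) throw new Error('Unknown opcode: 0x' + op.toString(16));
  handler.execute(op, state);
}
```

A test for a single opcode then needs nothing but that handler and a tiny fake state object, which is what makes 100% coverage practical.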
To validate the code coverage I used the coverlet.msbuild and ReportGenerator NuGet packages combined with a Cake recipe to generate a code coverage report:
I like to benchmark the code to compare some solutions in a fast and precise way. To create a benchmark in the .NET system the BenchmarkDotNet is the right choice. BenchmarkDotNet is quite easy to use and you can set up a project with it in less than 5 minutes.
When I used it on the ARC-8 implementation, I discovered that I could improve performance 18x just by removing some LINQ code and caching the opcode lookup.
In the next ARC-8 devlog I will talk about the ARC-8’s implementation on Blazor.
If you have any questions about what I covered above, or any tips about the CHIP-8 emulator you would like to share, please let me know in the comments section below.
Icons made by Freepik, Vignesh Oviyan and Eucalyp from www.flaticon.com are licensed under Creative Commons BY 3.0
I think it's important for any professional Unity programmer to understand these points, so I list them briefly below. You can (and should) read the full documentation to understand all 8 points in more depth.
Read the full documentation: https://docs.unity3d.com/Manual/overview-of-dot-net-in-unity.html.
For this first post, the chosen one is the Unity Native Share Plugin by Süleyman Yasir KULA, which I used on my latest mobile game Puzzimals.
Unity Native Share Plugin is:
A Unity plugin to natively share files (images, videos, documents, etc.) and/or plain text on Android & iOS.
When you use this plugin in an Android or iOS project, you will be able to invoke the native share of the target operating system. For example, here is the result when I used it on Android:
You can follow the instructions on the plugin GitHub repository: https://github.com/yasirkula/UnityNativeShare to install it:
The plugin is quite simple to use; there is example code in its repository that shows how you can use it with a few lines of code.
Some time ago I made the gist below that uses the Unity Native Share Plugin
to add a social share component to any game object:
Unity has recently launched the Unity Game Growth Program, an accelerator program for free-to-play indie iOS/Android games made with Unity.
Game Growth is a new game accelerator for mobile indie developers. We partner with indie game devs so they can quickly and effectively scale their games while remaining 100% independent. Becoming a partner gives you access to user acquisition funding alongside industry-leading tools and experts in game operations. The best part? You keep full ownership of your studio and intellectual property.
If you meet the criteria and we accept you into the program, Unity will fund user acquisition for your game and provide the technology and Unity experts to help manage player engagement and monetization. We take care of the process that helps grow your game while you concentrate on development.
Game Growth is a revenue-sharing program – Unity and the developer team split the revenue from advertising and in-app purchases 50/50 after the user acquisition spend has been recouped. Put simply, Unity covers the cost of bringing in new players while we both share in the reward.
You can read more about the program here:
According to documentation, the Unity Game Growth Program works through 4 steps:
Getting started is easy. All you need is a published free-to-play mobile game made with Unity. Begin your application by submitting your project, advertising assets, and installing the Game Growth package.
This stage determines if your game is a good candidate for the program. We take a look at your core game performance, project details and overall program fit. If everything looks good, Unity covers the cost of a full user acquisition test.
If you become a partner, we work with you to design features, integrate business services, and optimize player engagement and monetization. You will also have access to a dedicated game ops team that works with you every step of the way.
Game Growth gives you access to the user acquisition funding and resources to take your game to the next level - with a 50/50 revenue sharing model. We acquire the right players then effectively manage those lifecycles and provide guidance for monetization.
There are 7 requirements that your game must meet to apply to the program:
- Made with Unity: game must be made with Unity 2018.4 or later.
- Mobile Free-to-Play: game must be free-to-play on iOS or Android.
- Published Games: game must be currently live and published on the Google Play or Apple App Store.
- Connected to Unity Dashboard: game must be connected to the Unity Dashboard and have a valid project ID.
- Age Restrictions: not accepting games aimed at audiences under 13 years of age.
- Advertising Assets: submit images and videos for use in Unity ads.
- Language: game must support English.
More details at: Unity Dashboard / Game Growth.
It is not the intent of this post to cover everything about getting through the Unity Game Growth Program; that is quite well documented on the Unity Game Growth Program page.
The idea of this post is to focus on the first of the four steps: Submit Game & Integrate Package.
Please, if you have not read the official documentation yet, go to the Game Growth Program page, click the Apply button, and follow the instructions.
After you finish step 6 - Confirm, come back here. You'll understand this post and its tips & tricks better after that.
The Submit Game & Integrate Package
step is divided into 3 sub-steps:
Download the Game Growth package and integrate it into your project. Once installed, refer to the package documentation for next steps.
You need to download the package through the Download Package option; then, in Unity, open the Package Manager window and install it from the tarball option.
If you are using a Unity version prior to 2019.4, you need to use the Add package from disk option.
After installing the package, if you're using UnityEngine.Purchasing in your project and you also use Assembly Definitions, you may see a lot of errors in the console about it, like: error CS0246: The type or namespace name 'IStoreController' could not be found (are you missing a using directive or an assembly reference?)
To fix it, just reference UnityEngine.Purchasing again in the Assembly Definition references:
Now, open the menu Game Growth / Easy onboarding
and follow the steps.
After the package has been installed and configured for sandbox, run your game on the devices. To confirm integration, make sure you remove the game from your device, reinstall, and then launch the game.
Build, install, and run the game on your target device (iOS / Android).
Then go to the Unity Dashboard and try to confirm the Run & Validate integration; it can take minutes or even hours before you can confirm successfully.
While running the game on an Android device, I ran the shell command ./adb logcat -s Unity PackageManager dalvikvm DEBUG and saw errors like this in the game log:
UriFormatException: Invalid URI: The hostname could not be parsed
at DeltaDNA.Network+<SendRequest>
[DDSDK] [WARNING] Event upload failed - try again later
I found out that the collect_url and engagement_url entries in the /Assets/DeltaDNA/Resources/ddna_configuration.xml file were empty, and this was the cause of the error.
To fix it, I had to go to the Game Growth / Configuration menu, then click the Configure SDKs button, so that collect_url and engagement_url were correctly configured in the file.
The final step is to publish your project on the Apple App Store and Google Play Store. Don’t forget to switch the package environment dropdown from Sandbox to Store in GGLauncher prefab. Run your game on the devices.
Build and publish the game to the store (App Store or Google Play), download/update the game and run it on your device.
Then go to the Unity Dashboard again and try to confirm the Publish & Validate integration; it can take minutes or even hours before you can confirm successfully.
I hope that these tips and tricks that I learned while applying my game Puzzimals to the Unity Game Growth Program can be useful to you too.
If you have any other tips & tricks you use on the Unity Game Growth Program process and you like to share it, please let me know in the comments section below.
OWASP ZAP (short for Zed Attack Proxy) is an open-source web application security scanner. It is intended to be used by both those new to application security as well as professional penetration testers.
As the Path Traversal alert documentation explains:
A path traversal attack (also known as directory traversal) aims to access files and directories that are stored outside the web root folder. By manipulating variables that reference files with “dot-dot-slash (../)” sequences and its variations or by using absolute file paths, it may be possible to access arbitrary files and directories stored on file system including application source code or configuration and critical system files.
The most common way to use this kind of attack against ASP.NET applications is trying to download configuration files, like the web.config file, from the server file system. By default, the IIS handlers will not allow downloading this kind of file.
It is good practice (mandatory, in fact) to validate user input, especially on routes/actions where there is some kind of access to the file system.
OWASP ZAP docs say:
Validate the user’s input by only accepting known good – do not sanitize the data
An easy way to perform basic user input validation is to use ModelState.IsValid; this property will always be false if any data sent by the client has an invalid or unexpected value. This is a good way to accept only the expected types for model properties.
Model state represents errors that come from two subsystems: model binding and model validation. Errors that originate from model binding are generally data conversion errors. For example, an “x” is entered in an integer field. Model validation occurs after model binding and reports errors where data doesn’t conform to business rules. For example, a 0 is entered in a field that expects a rating between 1 and 5.
A way you can extend and improve the validation is by using the validation attributes, like CreditCard, Compare, EmailAddress, Phone, Range, RegularExpression, Required, StringLength, Url, and Remote.
You can even use a custom ActionFilterAttribute to validate all your action models, as the official MS documentation suggests: Model Validation in ASP.NET Web API.
And use it in WebApiConfig:
It does not matter whether you use validation attributes or another validation approach; the important thing is that you should always validate user input before using it.
If you have any action on your API that lets the user define a file system path, be sure to validate that it is not passing strings like ../ or ..\. If you do not validate inputs like this, you can allow an attacker to navigate your app's file system, or even the host machine's file system.
OWASP ZAP docs say:
Ensure the user cannot supply all parts of the path – surround it with your path code
OWASP ZAP can report some false positives, especially for routes that have an argument with the same name as the action: https://localhost:8080/api/tests/test1?kind=test1
This happens because OWASP ZAP tries to use the action name in all arguments to see if it can access a different resource or file.
In my case, all reports like this were false positives.
You can change the risk alert to False Positive
for each URL in the context alert filters:
Remember: only mark an alert as a false positive after validating that URL's action code and being certain that it is a false positive.
In most cases ASP.NET Web APIs/apps are not exposed to a Path Traversal attack, but poor user input validation in a file-system-handling action can easily expose the whole API/app.
These are the basic rules from the OWASP ZAP documents:
Icons made by Freepik, Vignesh Oviyan and Eucalyp from www.flaticon.com are licensed under Creative Commons BY 3.0
I created this SpritesCollection component, which allows us to define a collection of sprites for a prefab/GameObject and then easily swap between the available sprites.
I strongly recommend that you use the Gist Importer to import this gist into your project, but if you want to import it manually, just access the gist and add all .cs files to any folder in your Unity project, except SpritesCollectionEditor.cs; this one should be added to an Editor folder.
Triangles can be classified by lengths of sides and by internal angles.
Below you can use the Triangle Classifier that I made with Unity.
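The classifier itself was made with Unity (C#), but the side-length rule it applies is simple enough to sketch in a few lines; the function name here is just illustrative:

```javascript
// Classify a triangle by its side lengths: all three equal is equilateral,
// exactly two equal is isosceles, all different is scalene.
function classifyBySides(a, b, c) {
  if (a === b && b === c) return 'equilateral';
  if (a === b || b === c || a === c) return 'isosceles';
  return 'scalene';
}
```

A real classifier would first check the triangle inequality (each side shorter than the sum of the other two) before classifying; that guard is omitted here for brevity.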
Icons made by Freepik, Vignesh Oviyan and Eucalyp from www.flaticon.com are licensed under Creative Commons BY 3.0
The font and the vertex sprite used are from Kenney.
This happens because ShadowCaster2D does not update its internal shape according to the SpriteShape's form.
I created this script to help a SpriteShape with a ShadowCaster2D cast correct shadows.
To use it you need to add a PolygonCollider2D or an EdgeCollider2D to your SpriteShape's GameObject, and then add the ShadowCaster2DFromCollider component to the same GameObject.
Below you can see a video showing a scene with some SpriteShapes before and after applying the ShadowCaster2DFromCollider component.
This solution was based on this Unity Forum post: https://forum.unity.com/threads/can-2d-shadow-caster-use-current-sprite-silhouette.861256
Initially I tried to use EditorApplication.update
, but as it is only called when something changes in the inspector, it ended up not serving this purpose.
It was then that I discovered this official Unity package: Editor Coroutines
The Editor Coroutines package allows the user to start the execution of iterator methods within the Editor similar to how we handle Coroutines inside MonoBehaviour scripts during runtime.
At this time we cannot use any of the yield classes present inside the Unity Scripting API, like WaitForSeconds and WaitForEndOfFrame, except for the CustomYieldInstruction.
However, there is a specific yielding class for wait seconds on Editor: EditorWaitForSeconds
In my case, I used the yield return null
to skip a frame within the Editor and get the refresh rate that I would like.
The result of using the EditorCoroutineUtility.StartCoroutine
More details in the official documentation: Editor Coroutines
A few months ago we built a Chrome Extension for a customer to allow users to capture Netflix captions and send them to the customer's web site to study later.
While we were developing it we needed to change some configurations on the extension to use different URLs of our API depending on what environment the Chrome Extension was using, like DEV (local), TEST, and Production.
For this tutorial, I'll use a basic Chrome Extension sample called Hello Extensions.
You can download it from this page https://developer.chrome.com/extensions/samples or directly from here.
It is not the intent of this tutorial to explain how to create a Chrome Extension from scratch; for that, you can use the official documentation: Getting Started Tutorial.
At that time we did not find any built-in or third-party solution to make this workflow easier and more streamlined, so we decided to create our own solution using gulp. This whole solution is explained below.
gulp is an open-source JavaScript toolkit created by Eric Schoffstall used as a streaming build system in front-end web development.
We decided to use gulp because a Chrome Extension is a bunch of .js, .html, and .json files, a perfect fit for gulp.
If we just load our downloaded Hello Extensions .zip file, or its extracted folder, directly on the Chrome extensions page, it will work and you will see the extension's icon in the toolbar.
This is not a problem if you have only one environment or if you don’t need different configurations for different environments, but as I’ve already explained, we needed different configs for different environments.
For the purpose of this tutorial, we will work with 3 environments (DEV, TEST, and PROD) and will use our configuration file just to change our plugin's hello.html text:
Move the Hello Extensions
files to a subfolder called src
.
At the end of this tutorial, our files structure will look like this:
If you have any doubt about the file structure during this tutorial, you can download the full solution in the Download section at the end of the tutorial.
To allow us to build our workflow, first, we need to install gulp.
Follow the instructions described in Quick Start.
In the section Create a project directory and navigate into it, you just need to open the root folder of our file structure described in the previous section.
In the section Create a package.json file in your project directory, you can use the default values for all the questions of npm init.
You don't need to perform the section Create a gulpfile, because we will do it in the next section.
Create a file called gulpfile.js
in the root folder.
This is the full gulpfile.js
file that will allow our development workflow for different environments.
Now I will explain each section of it.
This section defines the NPM packages needed for our gulpfile.js.
The first one is gulp; then we need the package del, which allows us to clean our dist folder; then gulp-merge-json, which allows us to merge our environment config files. The last one is fs, which allows us to read and write files.
This is quite simple, we are just reading the command-line argument called config
. If it is not present, the default value is DEV
.
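Reading the --config argument might look like the hand-rolled sketch below (the real gulpfile could just as well use a helper package such as yargs; this version only exists to make the default-to-DEV behavior explicit):

```javascript
// Look for "--config <NAME>" in the command-line arguments; when it is
// absent or has no value, fall back to the DEV environment.
function getConfigName(argv) {
  const index = argv.indexOf('--config');
  if (index !== -1 && argv[index + 1]) {
    return argv[index + 1];
  }
  return 'DEV';
}

// In the gulpfile this would read from the real process arguments:
const config = getConfigName(process.argv);
```

So gulp --config TEST selects TEST, while a plain gulp run silently means DEV.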
Now we jump to the last line of the file to explain the exports.default
.
Here we are basically defining the order of each function that will be called when we run our gulpfile.js
file:
Cleans our dist
folder.
Copies all files from src
folder to the dist
folder
Transforms our config.json files, merging the source one with the environment
one.
Writes our transformed config.json file to the scripts folder, to allow our Chrome Extension's .js files to access the config values.
Transforms our manifest.json
file too.
Watches for any change in the src folder and automatically repeats the previous steps.
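The heart of the transform steps above is the merge: the environment file only overrides the keys it declares, and everything else falls through from the baseline config.json. gulp-merge-json does this for us; a shallow-merge sketch just to make the behavior concrete:

```javascript
// Environment values win over the baseline; keys missing from the
// environment file keep their baseline values. (gulp-merge-json also
// merges nested objects; a shallow merge is enough to show the idea.)
function mergeConfig(baseline, environment) {
  return Object.assign({}, baseline, environment);
}
```

For example, a TEST file that only declares apiUrl leaves every other baseline value untouched.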
Change the content of the package.json file to the content below to update the dependencies.
Then run the command npm install
in the root folder, after that run the command npm install gulp
.
Now, if you just run the command gulp
in the root folder, you should see an output like this:
config.json files
We need to create our config.json files.
They are 3 files:
- config.json: the baseline file; our common configuration should be defined here and it will be used for the DEV environment.
- config.TEST.json: the file that defines specific configuration values for the TEST environment.
- config.PROD.json: the file that defines specific configuration values for the PROD environment.
If you're familiar with Web.config File Transformations or appsettings.json file transformations, the philosophy used here is the same: the baseline file (config.json) contains all the common configuration values, and the specific environment files, like config.TEST.json and config.PROD.json, need to define only the values that are different for that environment.
Create the 3 files inside the subfolder src
.
Here is the content of each one:
Run the command gulp
in the root folder, you should see an output like this:
There is a new subfolder dist
created. Load it on Chrome Extensions page:
The plugin should work ok.
Remember to remove the plugin previously loaded on Chrome and add it again from the dist
folder.
hello.js
Create a new file called hello.js
inside the subfolder scripts
.
We will use this file to change the H1
tag inside the hello.html
hello.html
Change the content of hello.html
file to the content bellow:
This will load the .js and add an id
attribute to our H1
tag.
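A hypothetical hello.js along those lines — the id, the config key, and the message format are my assumptions, since the real file content is shown in the downloadable sample, not here:

```javascript
// Build the text that will go into the H1 tag, using a value coming from
// the generated scripts/config.js (key name assumed for illustration).
function buildTitle(config) {
  return 'Hello Extensions (' + config.environment + ')';
}

// In the extension this would run on page load, writing into the H1 that
// hello.html gave an id to, e.g.:
// document.getElementById('hello-title').textContent = buildTitle(config);
```

Because config.js is regenerated per environment by gulp, the popup's text changes automatically between DEV, TEST, and PROD builds.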
The first thing we need to do is change our manifest.json file to allow the content of scripts/config.js and scripts/hello.js to be read by the extension's .js files.
Open the manifest.json
and change it to the content below:
The line "/scripts/config.js"
is what we need to access the configuration values.
manifest.json files
We need to create our manifest.json files, in the same way we created our config.json files.
They are 3 files:
- manifest.json: the baseline file, our common manifest definition, used by the DEV environment (already created).
- manifest.TEST.json: the file that defines specific manifest configuration values for the TEST environment.
- manifest.PROD.json: the file that defines specific manifest configuration values for the PROD environment.
Create the 2 files inside the subfolder src.
Here is the content of each one:
Run the command gulp
in the root folder.
This will use the DEV configuration from config.js and manifest.json.
This is the same as running gulp --config DEV.
Now, try to run gulp --config TEST
.
In the Chrome Extensions page you should see something like this:
Your extension is using the TEST environment values. When you click on the extension icon on the Chrome toolbar, you should see something like this:
You can try to run gulp --config PROD
. To see the PROD
environment values on the extension.
gulp --watch
To have a fast and streamlined workflow, we need the dist folder to update automatically every time we change something in our src folder, so we can see the changes right away in Chrome.
Run the command gulp --watch
. It’ll monitor your src
folder and update the dist
.
You can use the --config
option combine, like gulp --watch --config TEST
.
You can download the full source code of the plugin of this tutorial: chrome-extension-sample.zip.
In this .zip are all the files mentioned in this tutorial; you just need to set up your gulp environment and run npm install and npm install gulp before starting to use it.
In this tutorial, we learned how to create a development workflow for a Chrome extension, with different configurations and manifest.json for each environment, like DEV, TEST, and PROD.
Icons made by Freepik, Vignesh Oviyan and Eucalyp from www.flaticon.com are licensed under Creative Commons BY 3.0
To activate a Focused Inspector, select an item (GameObject, Component, or Asset)
and in the context menu hit the Properties...
menu item.
More details in the official documentation: Focused Inspectors
Unity 2020.1 was officially released on July 23, 2020, with a lot of fixes, API changes, changes, improvements, and features. You can find the new features directly in the Unity manual using this search: newin20201. To read all items, access the complete release notes.
- 805 fixes
- 79 API changes
- 70 changes
- 215 improvements
- 74 features
Below are the features that most caught my attention and that I believe can positively impact my development workflow.
Editor: Add focused Inspector, a property editor to inspect single object.
This is something I have been looking for for a while, an inspector editor for a specific item in the hierarchy. Unity went further, allowing an editor focused on a component too.
Right-click a GameObject in the Hierarchy view, or an Asset in the Project view. From the context menu, select Properties.

Alternatively, select the GameObject or Asset and do one of the following:

- From the main menu, select Assets > Properties.
- Use the Alt + P / Option + Shift + P shortcut.

Inspect the GameObject and locate the component you want to open a focused Inspector for. From the component's More items (⋮) menu, select Properties.
More details in the official documentation.
Editor: Introduced PreviewSceneStage. Implement custom stages by inheriting from this class.
I found this interesting but did not find any good documentation on how to use it.
Editor: Support to switch between debug and release code optimizations without restarting the Unity Editor improving Unity Editor performance when compiling in release.
GI: Added all lighting settings as an asset. This will allow the user to share them between scenes or switch them out in an easy way.
This can be quite useful when we need different lighting setups for different target platforms.
Graphics: GetPixelData API was added to Texture2D, Texture3D, Texture2DArray, Cubemap and CubemapArray classes. It allows getting a pointer to the data of a particular mip level/array element in a Texture for reading/writing, and without creating any memory allocations (similar to Texture2D.GetRawTextureData).
Input: The Input System has moved out of preview.
Particles: Ability to save and restore Particle System state data, making it possible to save a snapshot of an entire Particle System at a point in time. A use case for this is efficient rewind support, by saving periodic keyframes of the particle state to avoid full resimulations.
It seems to be possible through methods GetPlaybackState and SetPlaybackState, but they were already available in Unity 2019.4.
Physics: Add a new component ArticulationBody that helps to create articulations. An articulation is a set of bodies, some of which have their relative motion constrained. All bodies are organized in a logical tree, where parent-child relation expresses the constrained motion. Unlike regular Rigidbodies with Joints, articulations are solved with a different solver (Featherstone) in reduced coordinates which guarantees there will never be any stretch of locked degrees of freedom. Typical examples of articulations include robotic arms, ragdolls, etc.
This blog post talks in more detail about this new physics component: Use articulation bodies to easily prototype industrial designs with realistic motion and behavior.
- Shaders: New shader preprocessor (experimental).
- Faster shader compilation.
- Accurate include dependency tracking.
These were the features in Unity 2020.1 that caught my attention.
What were the features that caught your attention?
Unity has recently launched to everyone the Unity Distribution Portal (UDP), a platform that enables Unity developers to distribute Android games beyond Google Play, into stores like Samsung Galaxy Store, ONE Store, Mi GetApps, HUAWEI AppGallery, QooApp Game Store, SHAREit Game Store, TPAY MOBILE Stores, APPTUTTI, and VIVEPORT.
The Unity Distribution Portal was created to help make publishing mobile games on alternative app stores faster and easier for developers. When you submit your game, UDP automatically creates specific builds with all required SDKs in the background, saving you a substantial amount of time when publishing to multiple stores.
Unity Distribution Portal (UDP) lets you distribute your games to multiple app stores through a single hub. UDP repacks your Android build with each store’s dedicated In-App Purchase SDK. You can manage all your store submissions from the UDP console.
This post is focused on games that have IAP and use Google Play Services, like leaderboards and achievements. If your game does not have any of these features, publishing to UDP is a much more straightforward process and these tips probably won't be useful for you.
As I write this post Unity is working to add even more stores to UDP. Of the 9 stores supported, 7 are fully integrated with UDP. What does this mean? You just need to create your account on each store, via UDP (in some cases filling in some forms and sending some ID and bank docs), and UDP will perform all the steps needed to make your game available on the store.

It is not the intent of this post to cover everything about how to publish your game using UDP; I think this is quite well documented in the UDP package documentation.
The idea of this post is to point out some tips and tricks that can help you, as helped me, to understand how to use UDP.
Please, if you have not read the official documentation yet, go and read it, then come back here. You'll better understand this post and the whole UDP solution after educating yourself about it.
What the docs say: Set up and configure UDP in the Unity Editor, implement UDP in your game project, and populate your IAP Catalog with your in-app purchase products (if any).
Go to the UDP dashboard and just create a new game. At this moment just set the game title.
There are two ways to use UDP in your project: using the UDP package or using Unity IAP. In my case, I was already using IAP in the project.
This tip is in the official documentation, but would like to reinforce it:
In the Unity Editor, to choose UDP as the Android build target, select Window > Unity IAP > Android > Target Unity Distribution Portal (UDP).
You’ll need this when you build to UDP and remember to use the Target Google Play
when you build to Google Play.
What the docs say: Build your UDP Android package (apk), test it in the UDP Sandbox environment, and deploy it to the UDP console where you'll begin preparing your game for submission to the stores.
In the Sandbox Testing
section of your game page on UDP, you will find some instructions on how to test your game on the UDP sandbox.
You will need to do this before repacking your game for the stores.
Just build your .apk in Unity and open it in an emulator (like BlueStacks) or directly on your Android device. If you set up UDP right, you should see a screen like the one below when the game starts.
To log in on this screen, use the accounts set up in Sandbox Test Account. Remember, those e-mails don't need to exist; they are just test accounts to validate the UDP setup.

Perform the login on the UDP sandbox and test your IAP.
After this, you can refresh your game page on UDP and you should see the result below on Sandbox Testing
section:
If you did not set up UDP in your project, you will see a message like this in the UDP dashboard notifications: apk analytics failed, caused by: analyse apk failed, caused by:Failed to find GameSettings.prop, please import udp sdk and generate GameSettings.
When UDP repacks your game with a store-specific SDK it will, most of the time, change the name of the package, adding a store-specific extension, like:

- .gs for Samsung Galaxy Store
- .unity.migc for Mi GetApps
- .qooapp for QooApp Game Store

Because of this extension, we will need to configure a new OAuth credential for each extension on the Google API Console.
In my case, in the App Signature section of the game info, I used the recommended option Export and upload your own key and certificate. This is the same certificate used by Google Play Services and APIs.
Go to the Google API Console, select your game project in the dropdown, then go to the Credentials section. In the OAuth 2.0 Client IDs section, select the item with the name of your game, then copy the value of the SHA-1 certificate fingerprint field.
Now, go back to the Credentials section and click on the Create credentials button, then OAuth client ID.
In the new page, select Android
on the Application type
dropdown.
In the Name field you can write anything you want, but using your game name followed by the store extension can help you find it later.
In the Package name
field you should add the exact package name for the specific store on UDP. For example, in my case for Mi GetApps store, the package name was br.com.diegogiacomelli.puzzimals.unity.migc
.
If you are in doubt about the exact package name for your game on a specific store on UDP, you can click the Advanced link of that store in the Publish tab of UDP.
Now, in the SHA-1 certificate fingerprint field, you should paste the value you copied before from the original OAuth 2.0 Client IDs configuration.
Then click on the button Create
.
You will need to repeat those steps for each store that changes your package name.
What the docs say: On the UDP console, provide app store listing information and resources for your game, such as multi-language descriptions, multi-currency price points, screenshots, video trailers, and create releases for distribution.
As you follow the UDP documentation you will reach the point where you need to create your game on the UDP dashboard.

If you already have the game published on Google Play, as was my case with Puzzimals, you can use the top-right button IMPORT FROM GOOGLE PLAY on the game page.
After the import, you can edit the information as well.
You can add the hl argument to your Google Play URL to import a specific translation, like https://play.google.com/store/apps/details?id=br.com.diegogiacomelli.puzzimals&hl=pt to import the Portuguese translation.
What the docs say: Sign up with the stores using your UnityID, and register your game with the app stores directly from inside the UDP console.
Go to the Publish
tab, select a store and click on Sign up to...
.
Follow the required steps for each store you would like to publish to.
What the docs say: Select the stores you want to submit your game to. UDP automatically repacks your game into store-specific builds and submits them to the stores along with the relevant game descriptions.
Now that you have a valid UDP .apk
and game info filled, you can upload it on the UDP dashboard, inside the Binary
section.
Now the specific store should show a Repack game
dropdown on the UDP dashboard.
Select Repack game and UDP should generate a new .apk of your game with the store SDK. This .apk will be available for download via the Advanced link; download it using the Download APK Certificate option.
For each store, you need to test the repacked .apk to verify that everything in your game is running OK.

Open the repacked .apk in an emulator or on an Android device.

Test your IAP and Google Play Services: are they working as expected?
For each store, you will see different screens. For example, for QooApp you will see something like this:
Now you can select Submit to Store
in the dropdown, then select the checkbox store on Publish
page and click on the Publish
button.
UDP will send the game to store review and you can monitor the progress on the dashboard.
UDP will send you an e-mail when the game is approved.
What the docs say: When your game is published, monitor its performance across the different stores from the UDP reporting dashboard.
Wait for the game to be published on the store and you will see the Reporting Dashboard being populated with stats about the game.

I hope these tips and tricks that I learned while using UDP help you on your UDP journey.
The theme for this game jam was “Cursed”. I took some time thinking about what game I would like to play and develop with such a theme, then I remembered how much I loved to play Lemmings and how those little ones were cursed to live in the small world where some of them should sacrifice for the others.
With this in mind, I started developing the game and things happened very well and I delivered my entry right on time. I had the finished version some hours before, but I took some time to review it.
The final result is a simple game that takes the Lemmings core mechanic and adds a twist: in Curseings, there are two kinds of walkers, the humans and the cursed, and you need to take each kind to different portals. If a cursed one touches a human, that human becomes cursed too. The player will need to use this new mechanic to overcome some levels; sometimes the player will need to avoid them touching each other, sometimes not.
The game can be downloaded for Windows and Mac here: https://giacomelli.itch.io/curseings.
If you liked the game, you can cast a vote for the game jam here: https://itch.io/jam/kenney-jam-2020/rate/737655.
]]>Recently, I was watching a video by Daniel Shiffman about Maurer Rose that inspired me to try it out at Unity.
Now you must be thinking “What the hell is a Maurer Rose?”, Well, I didn’t know it until I watched that video.
Let’s see what Wikipedia says about it:
A Maurer rose of the rose r = sin(nθ) consists of the 360 lines successively connecting the above 361 points. Thus a Maurer rose is a polygonal curve with vertices on a rose.
The following gif shows the evolution of a Maurer Rose (n = 2, d = 29°) from 1 point to 361 points.
A Maurer Rose is always defined by two input parameters: n and d.

- n represents the number of petals. The rose has n petals if n is odd, and 2n petals if n is even (look at the gif above: n = 2, then 4 petals).
- d represents the angle in degrees for each line.

Let r = sin(nθ) be a rose in the polar coordinate system, where n is a positive integer. We then take 361 points on the rose: (sin(nk), k) (k = 0, d, 2d, 3d, …, 360d), where d is a positive integer and the angles are in degrees, not radians.
This definition talks about the polar coordinate system. The most important thing to know about it is this: each point on a plane is determined by a distance from a reference point and an angle from a reference direction. So, to draw a line from a point, we need to take that point, an angle, and a distance to calculate the second point.
The basic formula is: r = sin(nθ)
and we need to apply it to 361 points, where:
r = sin(n * (d * point))
In C# the code is something like this:
We calculate the angle for each point, then we get r and find the x and y of the second point in the polar coordinate system. The Mathf.PI / 180f is there to convert from degrees to radians, to work properly with the Mathf functions.
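Outside Unity, the same point calculation can be sketched in plain Python (the function name is mine; the original post uses C# and Unity's Mathf):

```python
import math

def maurer_rose_points(n, d, count=361):
    """Compute the vertices of a Maurer rose r = sin(n*theta).

    n and d are the two input parameters from the definition above;
    angles are in degrees, so we convert to radians before calling the
    trig functions (the C# version uses Mathf.PI / 180f for the same reason).
    """
    points = []
    for k in range(count):
        angle = math.radians(d * k)  # theta = d * k, converted to radians
        r = math.sin(n * angle)      # distance from the origin
        x = r * math.cos(angle)      # polar -> cartesian
        y = r * math.sin(angle)
        points.append((x, y))
    return points

points = maurer_rose_points(n=2, d=29)
print(len(points))   # 361 vertices, as in the definition
print(points[0])     # (0.0, 0.0) -- k = 0 gives r = sin(0) = 0
```

Connecting these 361 points in order with straight lines is exactly what the LineRenderer does in the Unity version below.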
Now, the complete code for a MonoBehaviour that takes the formula we saw in the previous section and combines it with a LineRenderer to draw the Maurer Rose.
To use it, just create a new GameObject and add the MaurerRoseLineRenderer
component to it.
If you just hit the play button, you will see something like this:
Go to the LineRenderer component and change the line width to 0.03, then hit the play button again.
If you change the values of N and D of the component in the inspector, you will see a new Maurer Rose being drawn.
The image below shows the result of 6 different inputs (the same inputs used on Wikipedia):
Now that you understand what a Maurer Rose is and how to code it inside Unity, you can try some crazy things with it, like animating the input values or the number of points, or even using it to build a particle system.

Below I show my real-time attempt at this:

In this tutorial, we learned how to draw a Maurer Rose with a LineRenderer. This simple formula shows how math can be used to explore creativity and beauty.
Icons made by Freepik, Vignesh Oviyan and Eucalyp from www.flaticon.com are licensed under Creative Commons BY 3.0
The font used on the WebGL sample is from Kenney.
]]>Don’t let the cute look or super fun animations fool you, Puzzimals is a cube-matching game challenging you to match identical tiles in 100 levels of thought-provoking puzzles in a world map.
Packed with features and creative puzzles you can combine cubes of ice, wild cards, bombs and so much more!
Each level unlocks and increases in difficulty and offers an almost infinite number of combos.
Fans of casual puzzle games will enjoy the crisp, colorful graphics featuring cute animals and the compelling music offers great relaxation.
]]>docs
button to the documentation page.
I made the gist below that uses the plugin to add a social share component to any game object.
Unity3D has a code interface for social features in games; these interfaces are called the Social API.

It provides a unified interface to different social back-ends, such as GameCenter, and is meant to be used primarily by programmers on the game project.

When you use it on a real platform, you will need a plugin that implements those interfaces; in the case of Android, this job is done by the Google Play Games Plugin for Unity.
First of all, you need to create the game on Google Play Console as with any other game and make an app release (an Internal test track already works).

Then go to the Services & APIs menu and enter Google Play game services.

Fill in only the information required for testing, then go to the Linked apps menu and link it to your app on Google Play Console.
In the Game services section
open the Achievements
menu and add some achievements.
I found empirically that test achievements that did not have an icon set on the Google Play Console did not show the standard Google Play Games UI when unlocked and were automatically locked again about 1 hour after they were unlocked.
To install the Google Play Games Plugin for Unity
, you should follow the steps in Plugin Installation and Android Setup sections.
One thing that can be a little confusing is the certificate fingerprints.
Go to the App Signing section on Google Play Console and use the App Signing by Google Play option.

Now you need to copy the SHA-1 fingerprint of your Upload certificate from the App signing section.
Then go to your OAuth credentials on Google API console.
In the Google API console, select your game in the top dropdown, then go to the Credentials menu and edit the OAuth 2.0 Client IDs entry. Paste the fingerprint in the SHA-1 certificate fingerprint field and save it.
Remember to remove the SHA-1 prefix, if you copied it along.
Now we have the basic setup done, we need to write some code to test it on an Android device:
Add the script above to a game object in the scene, then build and run the game on an Android device.
If everything was done correctly, you should see a screen like this:
With this basic setup you can start to use the other features of the Google Play Games Plugin for Unity, such as:
Learn more about them on the plugin's GitHub page.

Below is a list of tips and tricks for less common scenarios.
If anything goes wrong on the Android device you will need to see its logs; to do so, you can use adb logcat. On macOS, this shell snippet does the job:
cd /usr/local/Caskroom/android-platform-tools/29.0.5/platform-tools/
./adb logcat -s Unity PackageManager dalvikvm DEBUG
If you are using a different version of Android Platform Tools you will need to change the
29.0.5
version.
If your game is using Assembly Definitions you will need this step.
The plugin code comes without any assembly definition, so to use it in our code you will need to create two assembly definitions for the plugin:
Go to the Assets/GooglePlayGames
folder and create a new Assembly Definition
called GooglePlayGames
.
Go to the Assets/GooglePlayGames/Editor
folder and create a new Assembly Definition
called GooglePlayGames.Editor
.
Now, go to your game assembly definition and add a reference to GooglePlayGames.
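For reference, an .asmdef is just a JSON file created by Unity; the two definitions above could look roughly like this minimal sketch (simplified, not the exact files Unity generates):

```json
{
  "name": "GooglePlayGames"
}
```

And the editor one would reference the runtime assembly and be restricted to the Editor platform, e.g. `{"name": "GooglePlayGames.Editor", "references": ["GooglePlayGames"], "includePlatforms": ["Editor"]}`.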
After this, if you receive a compilation error about the BasicApi.Nearby namespace, just close and open Unity again.
If you see the log message above in the adb logcat, you need to check in Unity Player Settings / Publishing Settings whether the minify settings are set up correctly, as described here: Play Games Services Proguard configuration
Icons made by Freepik, Vignesh Oviyan and Eucalyp from www.flaticon.com are licensed under Creative Commons BY 3.0
A SQL Server Database Project has a solution (.sln) like any other kind of Visual Studio project, and we need to build it to generate our DACPAC.
A data-tier application (DAC) is a logical database management entity that defines all of the SQL Server objects - like tables, views, and instance objects, including logins - associated with a user’s database. A DAC is a self-contained unit of SQL Server database deployment that enables data-tier developers and database administrators to package SQL Server objects into a portable artifact called a DAC package, also known as a DACPAC
The variables used in the script portions are described in the complete azure-pipelines.yml at the end of this post.
With our .dacpac file built, we now need to generate our .sql database script. To perform this operation we need sqlpackage.exe; this command-line tool is available in some Azure Pipelines images, such as vs2017-win2016, so you need to use that image in your azure-pipelines.yml file.

Add the CmdLine task below to generate the .sql database script:
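The original YAML is not reproduced here, but a sketch of what such a task could look like follows. The sqlpackage.exe location and the pipeline variables ($(sqlPackagePath), $(databaseName), $(targetDbConnectionString)) are assumptions you will need to adapt to your image and project:

```yaml
# Sketch only: generates a .sql upgrade script from the built .dacpac.
- task: CmdLine@2
  displayName: 'Generate .sql database script'
  inputs:
    script: >
      "$(sqlPackagePath)\sqlpackage.exe"
      /Action:Script
      /SourceFile:"$(Build.SourcesDirectory)\bin\Release\$(databaseName).dacpac"
      /TargetConnectionString:"$(targetDbConnectionString)"
      /OutputPath:"$(Build.ArtifactStagingDirectory)\$(databaseName).sql"
```

The /Action:Script action compares the .dacpac against the target database and writes the incremental upgrade script to the output path, without touching the database itself.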
Now we need to publish the .sql file to the build artifacts:
In the approach described above the .sql file is not automatically applied to the database, so someone needs to download, check, and apply the file manually against the target database. If you want to apply the database script automatically, you can change the CmdLine task that generates the .sql file to update the target database using the /Action:Publish option of sqlpackage.exe.
The HTTP Archive (HAR) format as defined in the W3C Specification is an archival format for HTTP transactions that can be used by a web browser to export detailed performance data about web pages it loads.
HarSharp is a small and easy-to-use library to parse HAR files to .NET objects that I created some years ago for .NET Framework. Today I released version 2.0.0 with support for .NET Core / .NET Standard.
Har File Analyzer with a HAR file loaded
If you want to use the Har File Analyzer or want to know how to create a WinForms app with HarSharp, you can download the full solution here: HarSharp_wingui.zip
]]>HarSharp is a small and easy-to-use library to parse HAR files to .NET objects that I created some years ago for .NET Framework. Today I released version 2.0.0 with support for .NET Core / .NET Standard.
NuGet package:
install-package HarSharp
More details at: https://github.com/giacomelli/harsharp
I would like to thank André Costa and mmoreira2000 for making pull requests with bug fixes, and Olivier Beaudoin for contributing the support for .NET Core / .NET Standard.
]]>Recently a friend asked me about the steps I follow to create dotnet new templates; while I was describing the steps to him I realized that information could be useful to someone else or even to my future self.
Some time ago I created some dotnet new templates for GeneticSharp: dotnet new templates for GeneticSharp, and I will use those templates as samples for the steps below.
Nowadays there are two main ways to build a dotnet new template: creating a .nuspec
file or defining the package properties directly inside your .csproj
.
For the GeneticSharp templates I used a .nuspec.
When I was trying to create the first templates I used this Microsoft documentation to learn about it: Custom templates for dotnet new.
1. Create a Templates folder: sample
2. Create a .csproj or create a .nuspec: sample
3. Create a content folder inside the Templates folder: sample
4. Create a folder inside content for each template and put your template project source code there: sample
5. Create a template.config folder, then create a template.json file: sample. In this .json you should define things like the template name (shortName) and the root namespace that will be replaced when a new project uses this template (sourceName).
6. Generate the .nupkg with dotnet pack or nuget pack.
7. Install the templates from the .nupkg file: dotnet new -i your_templates_file.nupkg
8. Test a template: dotnet new template_shortname -n new_project_namespace -o output_folder (here is a .cmd sample for the last 3 steps: sample)
9. Publish the .nupkg on http://nuget.org or to your private NuGet feed: sample

That’s it, with just 9 steps you can create your own dotnet new template and jumpstart your next project setup.
]]>The sample lines will result in the same string
The $ special character identifies a string literal as an interpolated string.
]]>String interpolation provides a more readable and convenient syntax to create formatted strings than a string composite formatting feature.
Expression body definitions for methods and read-only properties are supported starting with C# 6. Expression body definitions for constructors, finalizers, property accessors, and indexers are supported starting with C# 7.0.
So you can use the expression body definition for other members, like properties:
NVARCHAR (4000).

This happens because Dapper cannot infer the exact type and length of the table column in the database.
The downside of the NVARCHAR (4000) in the arguments is that the database can choose a bad execution plan for the query.
To avoid this you can pass the type of the argument to Dapper:
To make this solution easier to use, I made two extension methods, ToVarChar and ToNVarChar:
NO!

The Azure Service Bus maximum message size is 256 KB for the Standard tier and 1 MB for the Premium tier (https://docs.microsoft.com/azure/service-bus-messaging/service-bus-quotas).

A simple solution to overcome this limitation is to split the messages into blocks of at most 256 KB.

Below is a simplified version of the code that I used for the Standard tier case:
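The original C# is not reproduced here, but the idea can be sketched in Python (the names and the block metadata are my own simplification, not the original implementation): split the payload into blocks no bigger than the limit, tag each block with its index and the total count, and reassemble them on the consumer side.

```python
MAX_MESSAGE_SIZE = 256 * 1024  # Standard tier limit, in bytes

def split_message(payload: bytes, max_size: int = MAX_MESSAGE_SIZE):
    """Split a payload into blocks of at most max_size bytes.

    Each block carries its index and the total count so the consumer
    can reassemble the original payload in the right order.
    """
    total = (len(payload) + max_size - 1) // max_size  # ceiling division
    return [
        {"index": i, "total": total, "body": payload[i * max_size:(i + 1) * max_size]}
        for i in range(total)
    ]

def join_message(blocks):
    """Reassemble the blocks back into the original payload."""
    ordered = sorted(blocks, key=lambda b: b["index"])
    return b"".join(b["body"] for b in ordered)

payload = b"x" * (600 * 1024)   # 600 KB, bigger than the Standard tier limit
blocks = split_message(payload)
print(len(blocks))               # 3 blocks: 256 KB + 256 KB + 88 KB
assert join_message(blocks) == payload
```

In the real Service Bus scenario the index/total pair would travel as message properties, so the consumer knows when it has received all the blocks of a logical message.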
]]>Recently I had to distribute a T4 template inside a NuGet package, so developers who installed the NuGet package would get, in addition to the library assemblies, a T4 template to help generate some scaffold code.
Nowadays there are two main ways to build a NuGet package: creating a .nuspec
file or defining the package properties directly inside your .csproj
.
The project that I want to distribute via NuGet is a .NET Standard library, so I decided to use the dotnet core way: define the package properties inside the .csproj file.
We will create a sample project to use during the post. Open a command-line/terminal and type:
Let’s say that the T4 Template that we want to distribute is located inside a folder called Templates
and the file is named Scaffold.tt
.
Now open the SampleNugetProject.csproj
in the Visual Studio and add a T4 Template
:
Add new item…
Our Solution Explorer
should look like this:
We need to create a .targets file; it will be used to copy the T4 Template from the NuGet package folder to the target project during the build process.

Create a file with the same name as your NuGet package project, but with the extension .targets.

For our sample the file will be SampleNugetProject.targets:
In the project .csproj
just add this ItemGroup
:
Inside the folder where our .csproj and .targets files are, type:
This will create a file bin\Debug\SampleNugetProject.1.0.0.nupkg
.
Create a new .NET Core project to test the package:
Now, add the NuGet package to the target project.
To test our NuGet package without publishing it to a NuGet source, just use the --source option and point it to the folder containing SampleNugetProject.1.0.0.nupkg.
If we look at our TargetTestProject there is no Scaffold.tt file yet; this is because it will be copied from the NuGet package folder to the project structure during the build process.

Just build the target project and the Scaffold.tt will be copied.
That’s it, now we have created a NuGet package that copies a T4 Template to the target project’s structure.

You can expand this solution to copy other kinds of files to the target project.

If you want to dig deeper into generating NuGet packages using dotnet-cli, start with Create a NuGet package using the dotnet CLI.
]]>Phi Dinh remembered a Twitter moment with tips and tutorials about the shaders used on Recompile. (Recompile Unity Tips and Tutorials).
Screen.cutouts for iOS/Android, Burst Compiler updates, TypeCache API in Editor code, PhysX Cloth Solver update, DSPGraph audio mixing/rendering engine, Intel® VTune™ Amplifier support, IDE support moving to packages, .NET 4.x is now default and Incremental Garbage Collection.
Andy Touch created a Twitter thread about the different 2D Light Types available in Unity 2019.2 (2D Light Types).
Dilmer Valecillos made a video showing how to use low poly assets for VR (Unity3d Oculus Quest Development - Adding A Low Poly Asset for VR Usage from the Asset Store).
With the release of Unity Editor 2019.1, the Shader Graph package officially came out of preview! Now, in 2019.2, we’re bringing even more features and functionality to Shader Graph.
License: (CC0 1.0 Universal) You’re free to use these game assets in any project, personal or commercial. There’s no need to ask permission before using these. Giving attribution is not required, but is greatly appreciated!
In conclusion then, based on the above tests, if you are going to crunch through work without the Burst Compiler, maybe look at using something like Tasks first before opting for Jobs. If you are going to use the Burst Compiler, use the Job System.
In our first 2D Pixel Perfect guide for retro games, we showed you how to set up the 2D Pixel Perfect tool and how 8-bit graphics were made back in the day. In this post, we fast-forward to the 16-bit era. With the help of Mega Cat Studios, you’ll learn how to create authentic art for Sega Genesis (or Mega Drive) and Super NES-style games using Unity settings, graphics structures, and color palettes
ML-Agents Beta 0.9.0
(ML-Agents Beta 0.9.0)
Generative Adversarial Imitation Learning, pre-training for the PPO trainer, training generalized reinforcement learning agents, options for Nature and ResNet CNN architectures.
The list below is not intended to be right and definitive, there is no silver bullet; it is just things that I learned and tested in more than a decade working with web APIs. Things that worked for me and my projects can easily not work for you and your projects, because the context is different and the challenges too.

Besides, if you disagree with some of these points, please leave a comment at the end of the post; we can all learn more from it!
First of all, building software that really meets the requirements, has good code quality, and performs well is more important than following someone's guidance. With that said, I always try to implement REST web APIs, but exceptions exist and we need to work with them, so don't try to force an operation that won't fit well into the REST way.

Remember, REST is an architectural style and, as with every architecture, you can (and most of the time should) adapt it to your project's needs and capacities.

Probably the most important rule about web APIs is to respect the meaning of the verbs; this will really make life easier for those who are consuming your web API.
- GET: to retrieve something.
- POST: to create something.
- PUT: to completely update something.
- PATCH: to partially update something.
- DELETE: to delete something.

Then, combining them with URLs, you get a really clear way to work:
GET, POST, PUT, PATCH, and DELETE, each verb combined with a resource URL.
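For illustration only (the customers resource name is hypothetical, not from the original post), the verb + URL combinations could look like:

```text
GET    /customers       retrieve the list of customers
GET    /customers/42    retrieve the customer with id 42
POST   /customers       create a new customer
PUT    /customers/42    completely update the customer with id 42
PATCH  /customers/42    partially update the customer with id 42
DELETE /customers/42    delete the customer with id 42
```

The verb carries the action, so the URL only has to name the resource.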
I try to keep the range of returned status codes small; the smallest set has only 3 statuses:

- 200: successful request.
- 400: a client error.
- 500: a server error.

Of course, you can extend this, like using 201 (Created) as the status for POST verbs, and 401 for unauthorized requests, but try to keep the list small, so it will be easy for whoever consumes your API to decide what to do with the different status codes.
Another good practice is to return an error code and an error message in the JSON body of requests that result in a `400` status code (client error); this way the client can react to them or display different information to the end user.
In most cases, a `400` status code is about some business logic rule that needs to be respected for the request to succeed, so providing this information can help the client understand what is wrong with the request.
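As an illustration, a `400` response body could carry both pieces of information. The field names and error code below are my own, not a standard:

```csharp
using System.Text.Json;

// Illustrative error payload for 400 responses: a machine-readable code
// plus a human-readable message the client can show to the end user.
public class ApiError
{
    public string ErrorCode { get; set; }
    public string ErrorMessage { get; set; }
}

public static class Sample
{
    public static string Serialize()
    {
        var error = new ApiError
        {
            ErrorCode = "CUSTOMER_NAME_REQUIRED",
            ErrorMessage = "The customer name is required."
        };

        // In ASP.NET Core this would typically be `return BadRequest(error);`.
        return JsonSerializer.Serialize(error);
    }
}
```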
I see a lot of developers using `v1` inside the code to "version" their web APIs. I really dislike this method. I can understand that it makes it easy to deploy a new web API version to the same virtual application, but we have to agree that we already do versioning with an SCM, like Git, SVN, or Mercurial, for everything else in our code, so why should we do versioning inside the code in the case of a web API?
In most cases your web API will have only one version, especially when you are developing a SPA and it is the only client of your web API.
Versioning of APIs is a controversial topic—you will find a lot of contradictory guidance on the internet. The pattern that is most commonly practiced is probably the one with a version identifier in a path segment of URLs. Since there is little consensus on versioning, simply offering our opinions on the topic may not be very helpful, but we offer two thoughts:
- Doing nothing at all for API versioning is an intelligent approach that is getting more attention
- Links and version identifiers in URLs make awkward bedfellows
In this case, the approach I use is to create a new site/virtual application every time I need to publish a major version of my web API, so there is no `v1`, `v2`, and so on in the route code.
Let's say I have a web API published on http://diegogiacomelli.com.br/sample-api/v1, and then I make some breaking changes to its public interface and need to publish another version on a separate URL to avoid any impact on the users that still use the old `v1`. I will just create a new `v2` virtual application, and the URLs will be:
http://diegogiacomelli.com.br/sample-api/v1
http://diegogiacomelli.com.br/sample-api/v2
At some point in the future you will want to drop support for the `v1` version; at that moment you can configure its virtual application to redirect (status code `301` or `302`) to the next/latest version.
Of course, it's also good practice to have a public changelog of your web API releases to notify the client developers.
When it comes to describing a REST web API, I can cite Swagger. It is not an attempt to create a WSDL-like standard for REST web APIs, but it is a good attempt to create an open standard for describing them.
Swagger is a specification and complete framework implementation for describing, producing, consuming, and visualizing RESTful web services.
I use Swagger a lot and really love it, mainly because of Swagger UI, which lets you generate a nice live console and documentation for your web API.
There are Swagger implementations for most languages: C#, Java, Python, Ruby, etc.
If you are using ASP.NET Web API, there are some projects that auto-generate the Swagger specification, like Swashbuckle.AspNetCore.
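For reference, a minimal Swashbuckle.AspNetCore setup looks roughly like this. The API title and version are illustrative, and the exact option types vary between Swashbuckle versions (here, the `OpenApiInfo` style of Swashbuckle 5.x):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.OpenApi.Models;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        // Register the Swagger generator, defining one document for our API.
        services.AddSwaggerGen(c =>
            c.SwaggerDoc("v1", new OpenApiInfo { Title = "Sample API", Version = "v1" }));
    }

    public void Configure(IApplicationBuilder app)
    {
        // Serve the generated swagger.json and the Swagger UI live console.
        app.UseSwagger();
        app.UseSwaggerUI(c => c.SwaggerEndpoint("/swagger/v1/swagger.json", "Sample API v1"));

        app.UseMvc();
    }
}
```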
Swagger is very popular nowadays, but if you have never seen it in action, take a look at the Marvel API, which uses Swagger and Swagger UI.
To make our lives as developers easier, it is good to define some conventions for how our REST web API will behave. The best effort I know of in this field is the very good Apigee e-books. They are not an attempt to create a bible or a mantra about how to design your API, but rather a collection of conventions observed in large REST web APIs, like Twitter, Facebook, LinkedIn, Google, etc.
Most of the good practices that I used, tested, and mentioned in this post I learned from my own experience and from the Apigee books.
I probably forgot some learnings that I still apply when developing web APIs, maybe because they are so ingrained that I do not even notice them anymore, but the ones cited above are, I think, the most remarkable.
I really recommend you read the books below; they detail a lot of good practices on how to develop good web APIs and can help you design yours.
The success of an API design is measured by how quickly developers can get up to speed and start enjoying success using your API.
When you design any interface, you should try to put yourself in the shoes of the user
Icons made by Freepik, Vignesh Oviyan and Eucalyp from www.flaticon.com are licensed under Creative Commons BY 3.0
Just install the `ToastyNotification.package` from the repository release page.
Now, every time your game raises an error in Unity's editor console log, Toasty Notification will warn you 😉!
]]>SIGGRAPH 2019 is taking place in Los Angeles, between July 28 and August 1. Unity has made a page where you can see the Unity’s presentations calendar (Unity at SIGGRAPH 2019).
Kin enables the free and instant transfer of value between users, no matter how small the amount. This lets users give something more meaningful than a like, making your app even more fun and engaging.
Sykoo made two videos, one showing his top 5 assets for Unity in 2019 (5 AWESOME ASSETS FOR UNITY 2019!) and a second one about the new Multiplayer system that Unity is working on (NEW MULTIPLAYER in Unity 2019 – Connected Games (Overview)).
Tyler Hurd presented a tool he has been working on: Actuator, which leverages real-time physics and tracked VR devices for rigging, animating, and puppeteering (Actuator).
Cake (C# Make) is a cross-platform build automation system with a C# DSL for tasks such as compiling code, copying files and folders, running unit tests, compressing files and building NuGet packages.
SonarCloud is a cloud service offered by SonarSource and based on SonarQube. SonarQube is a widely adopted open source platform to inspect continuously the quality of source code and detect bugs, vulnerabilities and code smells.
You should set up Cake in your .NET Core project before starting this tutorial; follow the Setting Up A New Project guide. In the project root folder, create a file called `build.cake`.
Add the lines below to the file:
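A plausible set of directives for this build, assuming the Cake.Sonar and Cake.Git addins (the package versions are illustrative):

```csharp
// build.cake preprocessor directives: tools and addins used by this script.
#tool nuget:?package=MSBuild.SonarQube.Runner.Tool&version=4.6.0
#addin nuget:?package=Cake.Sonar&version=1.1.22
#addin nuget:?package=Cake.Git&version=0.19.0
```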
We’ll use those tools and addins to create the complete .cake script.
Now we need to set some variables:
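A sketch of those variables, with the token placeholder kept as in the post:

```csharp
// Build arguments and SonarCloud settings.
var target = Argument("target", "Default");
var configuration = Argument("configuration", "Release");

// Replace with a token generated at https://sonarcloud.io/account/security/.
var sonarCloudToken = "[SONARCLOUD-USER-TOKEN]";

// Requires the Cake.Git addin; passes the right branch name to SonarCloud.
var branch = GitBranchCurrent(".").FriendlyName;
```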
Where:

- `[SONARCLOUD-USER-TOKEN]`: replace it with a token you can generate at the page https://sonarcloud.io/account/security/.
- `GitBranchCurrent(".").FriendlyName` will pass the right branch name to SonarCloud; alternatively, you can set it manually.

If you are using AppVeyor, you can use the line below to discover the branch name both locally and on AppVeyor:
The first task we define will build our .NET Core project:
Now we will define the task to run our tests:
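A sketch of the test task, assuming a hypothetical test project path and the Coverlet MSBuild integration:

```csharp
Task("Test")
    .IsDependentOn("Build")
    .Does(() =>
{
    var settings = new DotNetCoreTestSettings
    {
        Configuration = configuration,
        NoBuild = true,
        // Coverlet is driven through MSBuild properties.
        ArgumentCustomization = args => args
            .Append("/p:CollectCoverage=true")
            .Append("/p:CoverletOutputFormat=opencover")
            .Append("/p:CoverletOutput=../coverage.opencover.xml")
    };

    // Hypothetical test project path: adjust to your solution layout.
    DotNetCoreTest("./MyProject.Tests/MyProject.Tests.csproj", settings);
});
```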
This task uses Coverlet to collect code coverage. Install it in your test project via the Coverlet NuGet package:
install-package coverlet.msbuild
The last tasks to define are responsible for scanning the project and sending the build and test results to SonarCloud:
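A sketch of those tasks using the Cake.Sonar addin; the key and organization values are placeholders you must replace with your own:

```csharp
Task("SonarBegin")
    .Does(() =>
{
    SonarBegin(new SonarBeginSettings
    {
        Key = "YOUR-PROJECT-KEY",         // see Administration / Update Key
        Organization = "YOUR-ORGANIZATION",
        Url = "https://sonarcloud.io",
        Login = sonarCloudToken,
        Branch = branch,
        // Points the scanner at the coverage report produced by the Test task.
        OpenCoverReportsPath = "**/coverage.opencover.xml"
    });
});

Task("SonarEnd")
    .Does(() =>
{
    SonarEnd(new SonarEndSettings
    {
        Login = sonarCloudToken
    });
});
```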
Where:

- Key: you can find your project key on your SonarCloud project's `Administration / Update Key` page.
- Organization: if your `projects` page URL is https://sonarcloud.io/organizations/YOUR-ORGANIZATION/projects, then your Organization value is `YOUR-ORGANIZATION`.

The last part of the `build.cake` file defines the order in which the tasks will run:
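A sketch of that task ordering and the final `RunTarget` call (the task names match the ones sketched earlier in this post):

```csharp
// Define the order in which the tasks will run:
// SonarBegin -> Build -> Test -> SonarEnd.
Task("Default")
    .IsDependentOn("SonarBegin")
    .IsDependentOn("Build")
    .IsDependentOn("Test")
    .IsDependentOn("SonarEnd");

RunTarget(target);
```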
Now, every time you want to build, test, and publish the results to SonarCloud, you just need to type this command in your project root folder:
MacOS:
./build.sh
Windows:
Open a powershell
terminal:
.\build.ps1
When the build finishes, you should see something like this:
Cake build
Then open your project on SonarCloud:
SonarCloud dashboard
That's it: your project is now ready to use Cake to publish results to SonarCloud.
If you are using git, you can add these lines to your .gitignore file:
tools/**
!tools/packages.config
As can be seen in the video above, the extension can be used in two ways after adding the `Scene Preview` window to the editor (via the `Window / Scene Preview` menu):

- Selecting a `Scene` and clicking the `Capture screenshot` button, in both edit mode and play mode.
- Opening a `Scene` in play mode: if there is still no preview screenshot for the scene, a screenshot will be taken automatically after 10 seconds.

Below you can see only the code of `ScenePreviewEditor.cs`; there are 3 more files for this extension.
Download the full gist or use the Gist Importer to import the complete `Scene Preview Window` into your project.
Remember that you need to save the above .cs files inside any `Editor` folder
What we found when talking to developers about their experience using Asset Bundles was that almost everyone doing so successfully were more or less writing the same high level systems a thousand slightly different ways. Addressables was born out of this obvious need for a framework to solve a set of common problems related to addressing, building, and loading assets.
Cyan made a blog post about Voronoi noise, a type of noise based on cells, and shaders (Voronoi).
We’ve produced three kits: A puzzle game, an FPS and an RPG. Get a brief introduction to each game genre and learn the basics of Unity without writing code. The kits offer beginners a good way to create your own game in under an hour before embarking on something with a broader scope like the Game Kits.
How do a businessman and a software developer working at an improvised kitchen table in St. Petersburg, Russia go on to produce sophisticated social games and mobile RPGs with some of the best graphics in the industry? Meet Sasha Pavlov and Oleg Sysoev, who started Playkot Games in 2009. Since then, they’ve grown their team to 150+ by carefully combining the right people, adopting Unity, and implementing the right strategies to capture significant market share around the world.
Unity released a new post in the Faces of Unity series, now presenting Dave Hunt, Lead Technical Artist in Copenhagen (Faces of Unity – Dave Hunt).
Learn more about Dave as he shares about his career journey and work-life balance as a parent here at Unity!
Brackeys published a video about learnings in 10 years of game development! (What I learned after 10 Years of Game Development!).
For this fourth post the chosen one is Design of a Warehouse Order Picking Policy - Using Genetic Algorithm.
The author's main motivation for this paper was improving the order-picking process, an essential operation that critically impacts warehouse and supply chain performance.
The study is composed of two phases: In the first phase, the shortest path between each pair of items is determined in a pick list leading to the second phase of determining the sequence of all items to be picked. A mathematical model is utilized to find the shortest path between each item pair in a pick list.
A genetic algorithm based approach is developed to decide the picking sequence of all items in an order list, given the shortest distance between each pair of items. The performance of the proposed algorithms is compared to popular heuristics used for multi-block warehouses, namely: S-Shape and Largest Gap.
Ahmet Can Saner, the paper's author, made great contributions to GeneticSharp during his master's degree, such as the Displacement Mutation, Insertion Mutation, and Partial Shuffle Mutation (PSM).
He wrote the paper for his master's degree at Istanbul Bilgi University, under the supervision of Dr. Zehra Düzgi.
You can access the full paper directly on Istanbul Bilgi University library.
]]>EditorGUIUtility.ObjectContent
The icons used in the video are Font Awesome icons and I used fa2png.io to convert them to .png.
Remember that you need to save the above .cs inside any `Editor` folder
Alexander Ameye made a tutorial about edge detection shaders with Shader Graph and how to use some Lightweight Rendering Pipeline customization to generate DepthNormals (Edge Detection Shader).
Render Textures are stored and handled on the GPU, which means we can’t access the data from them in a C# script which runs on the CPU – and you shouldn’t really need to for most effects. However, I wanted my player to be able to interact with water which was based on a Render Texture and only spawn particles if the player was in the water, along with some other effects.
What happens when you put 700+ people from Unity and our ecosystem in a secluded location once a year and tell them to solve hard tech, coding, and process problems? Their instructions? Unleash your creativity, collaborate like crazy, and embrace diverse perspectives.
This year at the Cannes Lions International Festival of Creativity, Unity showed up as the leading real-time 3D creation platform.
Garrexus made a breakdown on the Screen space Cosmic shader (Screen space Cosmic shader breakdown (Unity/ASE, Shaders/Textures included)).
Alexander Ameye again, now with a tutorial about collision effect (Collision Effect).
A number of judges evaluated all of the impressive submissions to pick the winners of the latest Asset Store challenge: Use Substance Painter to texture stunning assets that expand the futuristic world of Buried Memories: Yggdrasil.
Ray tracing hardware acceleration (RTX) is a newly available feature that allows graphics developers to increase their image quality. While solving complex problems, properly integrating this new set of APIs (DXR and Vulkan ray tracing) in a game engine is tricky, especially so when the engine supports anything from mobile to virtual production.
Brackeys published a video about gamepad input (CONTROLLER INPUT in Unity!).
Now we have 4 `dotnet new` templates: besides the already existing `GeneticSharpConsoleApp`, `GeneticSharpTspConsoleApp`, and `GeneticSharpTspUnity3d`, a new template for a Blazor client app was added, `GeneticSharpTspBlazorApp`:
dotnet new -i GeneticSharp.Templates
dotnet new GeneticSharpTspBlazorApp -o TspBlazorApp
cd TspBlazorApp
dotnet run
If you want to know more about how to use GeneticSharp with Blazor, take a look at the tutorial TSP with GeneticSharp and Blazor.
Four papers and one project were added to the list:
Only GeneticSharp:
install-package GeneticSharp
GeneticSharp and extensions (TSP, AutoConfig, Bitmap equality, Equality equation, Equation solver, Function builder, etc):
install-package GeneticSharp.Extensions
You should use the UnityNuGet to install GeneticSharp directly from NuGet.
Or you can use the latest GeneticSharp.unitypackage available on our release page.
Let’s evolve!
]]>According to Wikipedia, the travelling salesman problem (TSP) asks the following question: “Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?”
TSP is a classic sample for testing optimization techniques, and it's commonly used to demonstrate how to implement a genetic algorithm. For these reasons I will use it to show you how to implement a basic genetic algorithm in Blazor using GeneticSharp.
This post is like a mirror of TSP with GeneticSharp and Unity3D. It uses the same format to teach TSP and GeneticSharp, but instead of Unity3D, this one is about Blazor.
You can see the final result of this tutorial at http://diegogiacomelli.com.br/apps/geneticsharp-runner-blazorapp.
Note that the performance presented in this demo is not the performance GeneticSharp reaches in other kinds of apps, like an ASP.NET Core backend, a console app, or a Unity 3D game. Because WebAssembly does not support creating new threads, we are limited to using a Timer to make this sample interactive. More details about this in the next sections of the post.
To better understand this tutorial, you need some experience/knowledge in:
We will make very basic use of Blazor, and everything you need to complete this tutorial will be explained or provided by the code samples, but if you want to find out what's happening under the hood, take a look at the Blazor Get Started page.
Open a terminal and type:
dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.0.0-preview6.19307.2
This will install the latest Blazor templates for .NET Core.
This tutorial is based on Blazor `preview6`. If you are following it with a newer Blazor version and run into some problem, leave a comment at the end of the post or contact me on Twitter.
Now we'll create a scaffold Blazor app using the `blazor` template:
dotnet new blazor -o TspWithGeneticSharp
cd TspWithGeneticSharp
dotnet watch run
Wait for the message `Application started. Press Ctrl+C to shut down` to show up in the terminal, then open the URL http://localhost:5000 in your browser; you should see something like this:
Open a new terminal in the same folder and type:
dotnet add package GeneticSharp
This will install the latest GeneticSharp NuGet package in your newly created Blazor app.
I recommend using Visual Studio Code to open the project. There are some cool VS Code extensions for working with Blazor.
In the same terminal where you added the GeneticSharp package, type:
code .
This will open the Blazor project with VS Code.
In the root folder of your Blazor project, create a new subfolder called `Tsp`. We'll add all our C# classes inside this folder.
The chromosome represents a solution of the problem we are trying to solve. In our case the TSP chromosome should represent “the shortest possible route that visits each city and returns to the origin city”.
To represent the cities route each gene of our chromosome will represent an index of a city in the route.
Create a file called `TspChromosome.cs`:
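A sketch of what this chromosome can look like, consistent with GeneticSharp's own TSP extension (assuming GeneticSharp 2.x namespaces):

```csharp
using GeneticSharp.Domain.Chromosomes;
using GeneticSharp.Domain.Randomizations;

// Each gene holds the index of a city in the route.
public class TspChromosome : ChromosomeBase
{
    private readonly int _numberOfCities;

    public TspChromosome(int numberOfCities) : base(numberOfCities)
    {
        _numberOfCities = numberOfCities;

        // Initialize the route with a random permutation of the city indexes.
        var citiesIndexes = RandomizationProvider.Current
            .GetUniqueInts(numberOfCities, 0, numberOfCities);

        for (int i = 0; i < numberOfCities; i++)
        {
            ReplaceGene(i, new Gene(citiesIndexes[i]));
        }
    }

    // Total route distance; filled in by the fitness evaluation.
    public double Distance { get; internal set; }

    public override Gene GenerateGene(int geneIndex)
    {
        return new Gene(RandomizationProvider.Current.GetInt(0, _numberOfCities));
    }

    public override IChromosome CreateNew()
    {
        return new TspChromosome(_numberOfCities);
    }
}
```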
The next step is to define our genetic algorithm's fitness function, but first we need to create a simple class to represent a city in a 2D space.
Create a file called `TspCity.cs`:
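A minimal sketch of the city class, with a helper to compute the Euclidean distance to another city:

```csharp
using System;

// A city located in a 2D space.
public class TspCity
{
    public TspCity(float x, float y)
    {
        X = x;
        Y = y;
    }

    public float X { get; }
    public float Y { get; }

    // Euclidean distance between this city and another one.
    public double Distance(TspCity other)
    {
        var dx = X - other.X;
        var dy = Y - other.Y;
        return Math.Sqrt(dx * dx + dy * dy);
    }
}
```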
Now we need to evaluate the `TspChromosome`.
Our fitness function will evaluate the chromosome fitness based on the total distance to reach all cities in the route represented by the chromosome. The shorter the distance, the better the chromosome.
Create a file called `TspFitness.cs`:
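A sketch of the fitness function: it sums the route distance, including the return to the origin city, and maps shorter routes to higher fitness. The normalization formula at the end is illustrative (assuming GeneticSharp 2.x namespaces):

```csharp
using System.Collections.Generic;
using GeneticSharp.Domain.Chromosomes;
using GeneticSharp.Domain.Fitnesses;

public class TspFitness : IFitness
{
    public TspFitness(IList<TspCity> cities)
    {
        Cities = cities;
    }

    public IList<TspCity> Cities { get; }

    public double Evaluate(IChromosome chromosome)
    {
        var genes = chromosome.GetGenes();
        double distanceSum = 0;
        var firstCityIndex = (int)genes[0].Value;
        var lastCityIndex = firstCityIndex;

        // Total distance to visit all cities in the order given by the genes...
        foreach (var gene in genes)
        {
            var currentCityIndex = (int)gene.Value;
            distanceSum += Cities[lastCityIndex].Distance(Cities[currentCityIndex]);
            lastCityIndex = currentCityIndex;
        }

        // ...plus the distance back to the origin city.
        distanceSum += Cities[lastCityIndex].Distance(Cities[firstCityIndex]);

        ((TspChromosome)chromosome).Distance = distanceSum;

        // Shorter distance => higher fitness (illustrative normalization).
        return 1.0 / (1.0 + distanceSum);
    }
}
```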
In this step we configure our genetic algorithm using the `TspChromosome`, the `TspFitness`, and some classic GA operators already built into GeneticSharp.
Create a file called `TspGA.cs`:
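A sketch of the GA configuration using classic operators shipped with GeneticSharp. The population size, operator choices, and termination are common picks for TSP, but treat them as assumptions; the timer usage here is also simplified compared to the post, which steps one generation per timer tick:

```csharp
using System;
using System.Threading;
using GeneticSharp.Domain;
using GeneticSharp.Domain.Crossovers;
using GeneticSharp.Domain.Mutations;
using GeneticSharp.Domain.Populations;
using GeneticSharp.Domain.Selections;
using GeneticSharp.Domain.Terminations;

public class TspGA
{
    private GeneticAlgorithm _ga;
    private Timer _timer;

    public TspChromosome BestChromosome => (TspChromosome)_ga.BestChromosome;

    public void Initialize(TspFitness fitness, int numberOfCities)
    {
        var chromosome = new TspChromosome(numberOfCities);
        var population = new Population(50, 100, chromosome);

        _ga = new GeneticAlgorithm(
            population,
            fitness,
            new EliteSelection(),
            new OrderedCrossover(),
            new ReverseSequenceMutation());

        // Stop when the best fitness stops improving (assumed termination).
        _ga.Termination = new FitnessStagnationTermination(500);
    }

    public void Run(Action generationRan)
    {
        _ga.GenerationRan += (sender, args) => generationRan();

        // WebAssembly offers no background threads, so the GA is driven from
        // a System.Threading.Timer callback instead of a worker thread.
        _timer = new Timer(_ => _ga.Start(), null, 0, Timeout.Infinite);
    }
}
```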
GeneticSharp can evaluate chromosomes with the fitness function in a single thread or in multiple threads, but WebAssembly (and Blazor) can use only the UI thread; in this scenario, calling the `GeneticAlgorithm.Start` method freezes the UI until the GA finishes.
To avoid this behavior, the solution is to run each generation of the GA inside a step of a `System.Threading.Timer`, as you can see in the `TspGA.Run` method.
APIs that aren’t applicable inside of a web browser (for example, accessing the file system, opening a socket, and threading) throw a PlatformNotSupportedException. (https://docs.microsoft.com/pt-br/aspnet/core/blazor/?view=aspnetcore-3.0)
Inside the `Pages` folder, create a file called `Tsp.razor`:
As we need to interop with JavaScript to manipulate the DOM, we will use some helper JS functions. Add the file `canvas-helper.js` inside the `wwwroot/js` folder:
Open the `index.html` file and add the tag below inside the `head` tag:
It’s awesome we can now use C# in the browser with Blazor. But unfortunately we can’t do everything with it, yet. Currently, WebAssembly isn’t able to directly access the DOM API, which means that Blazor isn’t able to either. (https://chrissainty.com/blazor-bites-javascript-interop/)
DOM integration is in the WebAssembly roadmap: https://webassembly.org/docs/future-features/.
Check the terminal window where the `dotnet watch run` command is running; if there is no error there, you can access the URL http://localhost:5000/tsp.
Hit the `Run` button and take a look at the browser console window; you will see the distance to reach all cities getting smaller as the generations run.
This is not a tutorial about Blazor good practices, so everything here is done in the simplest possible way to introduce how to use GeneticSharp with Blazor. I do not cover things you should do when working with Blazor, such as separating logic from the UI and using Blazor components.
Now our GA is running inside the browser, but we need to display the cities route better; let's create a visual representation of the cities.
In `Tsp.razor`, add the method `DrawCitiesAsync`:
Then call it from the `OnAfterRenderAsync` method, after the `clearCanvas` call:
Reload the url http://localhost:5000/tsp.
Now you should see something like this:
In the previous step we drew the cities, giving us the visual representation of the problem: the cities.
Now we need to draw the solution: the route represented by the best chromosome of each generation.
Add the following method to `Tsp.razor`:
Then call it from the `OnAfterRenderAsync` method, after the `DrawCitiesAsync` call:
Reload the URL http://localhost:5000/tsp again and hit the `Run` button; now you should see the route being optimized as the generations run:
With only 4 C# classes, 1 JS file, and 1 Blazor page we built a pretty nice genetic algorithm sample using Blazor with GeneticSharp. Now you can improve it with your own ideas or use some of mine ;):
The full source code used in this post can be downloaded or forked from this Gist: https://gist.github.com/giacomelli/9addc5182943ba25eb82201e30c76418
Let’s evolve!
Icons made by Freepik, Vignesh Oviyan and Eucalyp from www.flaticon.com are licensed under Creative Commons BY 3.0
EditorApplication.hierarchyWindowItemOnGUI
:
Sykoo released a YouTube video about the new Unity Terrain Tools package (Build Beautiful Terrains with Unity 2019! – New Terrain Tools Package).
Based on a set of guiding principles focused on delivering value to teams with diverse skill sets, we are designing a brand new experience for Collaborate.
The Animal-AI Olympics is an AI competition with tests inspired by animal cognition. Participants are given a small environment with just seven different classes of objects that can be placed inside. In each test, the agent needs to retrieve the food in the environment, but to do so there are obstacles to overcome, ramps to climb, boxes to push, and areas that must be avoided.
In-app purchases are an important part of your monetization strategy, but implementing them correctly can be challenging. Understanding how to create, surface, and optimize them without disrupting the game experience takes hard work.
Sykoo published a YouTube video about how to make NPCs for your games (How to Make AWESOME NPCs for Your Game!).
Brackeys made a video about how to add physics to ragdolls in Unity (RAGDOLL PHYSICS in Unity!).
It is important not to forget that one advantage of using Unity in architectural visualization is that you can get different types of output from the same scene. For example, it is possible to make a video in cinematic mode, as well as in virtual reality mode.
Alexandre Mutel released the first preview version of UnityNuGet, a service to install NuGet packages into a Unity project via the Unity Package Manager (UnityNuGet).
For this third post the chosen one is Frixel: 2D framing / pixel structural optimization.
Frixel is a plugin for Rhinoceros 6, a 3D computer graphics and computer-aided design (CAD) application.
A finite element analysis program that’s so easy a 5 year old could use it.
Provided only a closed curve representing a building massing and location for core, Frixel generates a 2 dimensional grid and runs structural analysis on it.
You can tweak the grid size, gravity magnitude, and lateral wind force to see how your design performs under different conditions. Additionally, it can run structural topology optimization to improve the design's mechanical performance by adding bracing in appropriate places.
GeneticSharp was used to design the structural chromosomes and fitness function to allow framing / pixel structural optimization through the Frixel optimizer.
According to the developers, these are the features of Frixel:
The project was developed from scratch during 24 intense hours at the beyondAEC Hackathon in July 2018, in Boston, MA, by Leland Jobson and Emil Poulsen.
You can access the project repository at: https://github.com/EmilPoulsen/Frixel.
]]>I built the #unitytips Museum, a collection of the best #unitytips.
You can access it directly in my site, in this url: http://diegogiacomelli.com.br/apps/unitytips-museum
Every week all #unitytips retweeted by @unity3d will be included to the collection.
If you know a really good #unitytips (at least 50 retweets) that should be added to the collection, please follow one of the options above:
Blazor can run your client-side C# code directly in the browser, using WebAssembly. Because it’s real .NET running on WebAssembly, you can re-use code and libraries from server-side parts of your application.
In a future post I will discuss what I learned about Blazor while developing the #unitytips Museum, but I can give a spoiler: I really liked it!
]]>Unity released a YouTube tutorial about how to create a portal effect in Lightweight Render Pipeline with Shader Graph (Making Portals with Shader Graph in Unity! (Tutorial)).
Dilmer Valecillos posted a video on augmented reality while implementing a dragging feature with ray-casting (AR Foundation with Unity3d and Adding Dragging Functionality with AR Raycast and Physics Raycast).
Bare Bones Scriptable Render Pipeline. Use it as a base to create your own.
An intuitive and lightweight editor for quickly creating smooth paths in the editor. You can easily make objects move along these paths, or use them as a guide for spawning objects and generating meshes.
Unity started its annual survey asking for community feedback (survey).
Sykoo published a YouTube video showing what he learned after making more than 100 game levels (What I Learned after Making 100+ GAME LEVELS! (Unity Level Design)).
Unity announced Unity Learn Premium (Introducing Unity Learn Premium: Get direct guidance from experts).
We’ve heard your requests for more advanced and in-depth content, and resources for specific industries like automotive, architecture, or media and entertainment. And we’ve also heard that you want access to experts who can give you immediate guidance and feedback on specific challenges or questions
Developing a strong game, large or small, depends on having a dedicated team of skilled and specialized people who share a vision and care deeply about what they are making. The development of Serekh was a combined effort by Edvige Faini, the Concept Artist, and the Unity Icon Collective. Some of the most significant challenges on this project were conceptual rather than technical.
You can use the `dotnet new` command to start a Unity3D project with GeneticSharp.
Some time ago I created a NuGet package called GeneticSharp.Templates; it allows developers to install GeneticSharp templates using the `dotnet new` command.
Among the templates available in `GeneticSharp.Templates`, there is one specifically for creating a Unity3D project with GeneticSharp. Besides this, the generated project contains a Travelling Salesman Problem (TSP) sample using GeneticSharp and can be used as a scaffold to start using genetic algorithms in your games.
Creating a Unity3D project using `dotnet new` is fast and simple; depending on your machine's performance, you'll have the sample running in less than a minute.
Open a terminal/prompt and type:
dotnet new -i GeneticSharp.Templates
dotnet new GeneticSharpTspUnity3d -o GeneticSharpSample
The `-o` argument is the output folder of the Unity3D project.
Now you can open `GeneticSharpSample` in the Unity3D editor.
Open the `MainScene` scene and hit the play button; you should see something like this:
That's it: now you can use this sample to understand how to use GeneticSharp in your game.
If you want more details about the code and the sample itself, please take a look at the tutorial TSP with GeneticSharp and Unity3D.
If you need an introduction to genetic algorithms, this tutorial could help you Function optimization with GeneticSharp.
Let’s evolve!
]]>All Gists are imported into the `Assets/Gists` folder
Sykoo made a video about the Buried Memories Volume 2: Serekh (Buried Memories: Serekh – New HDRP Asset Pack for Unity! (Overview)).
Unity 2019.3 will add support for using Unity as a library controlled by native Android/Java and iOS/Objective C apps so you can easily insert AR and other Unity features.
You can now easily add a Unity scene (e.g. ARCore scene) to any native or Xamarin app via .aar file.
In anticipation for Cannes this year, we surveyed 1000 creatives within advertising and marketing to gauge a better understanding on their comfort level, technical competency, and enthusiasm towards AR, including what it means for the future of storytelling. And what we found was very telling
Harry Alisavakis released another tutorial in his VFX Master Shader series (My take on shaders: VFX Master Shader (Part II)).
Real-time 3D is changing everything, in much deeper ways than most people realize. Forged in gaming, this technology continues to transform the way games are created, operated and monetized. Its impact now also extends to industries of all kinds, from film to automotive.
What do System Shock 3, Oddworld: Soulstorm and Harold Halibut have in common? Well, all of these incredible-looking upcoming titles are built on Unity, and more specifically will utilise Unity’s upcoming High Definition Render Pipeline or HDRP for short.
Unity announced its Verified Solutions Partners program (Unity's new partnerships bring verified solutions to developers across industries).
Verified Solutions Partners represent a collection of 3rd-party SDKs, plugins, editor applications, cloud services, and more that enable the success of your project while ensuring deeper technical alignment with partners’ products and services to ensure 0% developer downtime. Partners accepted into the program go through a verification process to ensure their SDK or software is optimized for the latest version of Unity.
Elena Nizhnik released a new Community Component
post, highlighting some games, posts, videos and tools (Community Component – audio design, pixel games, custom tools and Best of Made with Unity on YouTube).
Sykoo again, now with a timelapse level design video (LEVEL DESIGN IN UNITY! - Village & Castle Scene (Timelapse)).
For this second post the chosen one is Context-Sensitive Code Completion: Improving Predictions with Genetic Algorithms.
The main motivation of the author about this paper was: Current methods of training code completion systems can possibly be improved in order to reduce prediction errors. This requires that the amount of information considered in a pattern is optimized. The question raised is then: how should the training of predictive models be focused in order to increase prediction quality?
This thesis will aim to answer the question: to what extent is it possible to improve predictions of existing state-of-the-art code completion systems with a genetic algorithm?
GeneticSharp is an open-source Genetic Algorithm library for C#, released under the MIT license (Giacomelli). It has an extensible interface that allows for most, if not all, functionality to be implemented from scratch via interfaces or leveraged by extending base classes. Classes and interfaces also use the same terminology that has already been established, which makes the translation from theory to implementation much more clear.
As a result of the new training scheme, the quality of predictions can be increased without losing generalizability. Application of the new training scheme could possibly be applied to any code completion systems that trains a predictive model, making it a candidate for improving existing systems as well as in future research.
results comparison between GCC and GeneCSCC (developed using GeneticSharp)
Marcus Ording wrote the paper for his degree project in Computer Engineering at KTH Royal Institute of Technology from Stockholm, Sweden.
You can access the full paper directly on DiVA Portal.
]]>In a previous tip I showed how to create hierarchy window group headers using `EditorApplication.hierarchyWindowItemOnGUI`; now I'm improving it, allowing you to customize its style.
To change the style, you just need to edit the values of the HierarchyWindowGroupHeaderSettings in the inspector
Remember that you need to save the above .cs file inside any Editor folder.
It's a good practice to add the EditorOnly tag to your group header game objects.
EditorApplication.hierarchyWindowItemOnGUI:
Any GameObject whose name starts with “—” will be considered a group header.
You can set the tag of the header game objects to EditorOnly to exclude them at runtime.
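A minimal sketch of how such a header can be drawn with this callback (the gray color and centered label style are my own choices, not necessarily the gist's):

```csharp
using UnityEditor;
using UnityEngine;

[InitializeOnLoad]
public static class HierarchyGroupHeader
{
    static HierarchyGroupHeader()
    {
        EditorApplication.hierarchyWindowItemOnGUI += OnItemGUI;
    }

    static void OnItemGUI(int instanceID, Rect selectionRect)
    {
        // Any GameObject whose name starts with "—" is treated as a group header.
        var gameObject = EditorUtility.InstanceIDToObject(instanceID) as GameObject;

        if (gameObject != null && gameObject.name.StartsWith("—"))
        {
            EditorGUI.DrawRect(selectionRect, Color.gray);
            EditorGUI.LabelField(
                selectionRect,
                gameObject.name.Replace("—", string.Empty).Trim(),
                new GUIStyle { alignment = TextAnchor.MiddleCenter });
        }
    }
}
```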
Sebastian Lague made an ecosystem simulation with foxes and rabbits in Unity3d (Coding Adventure: Simulating an Ecosystem).
Metal 3 was announced last week at WWDC 2019. Apple presented it along with relevant numbers:
- Metal can now make 100 times more draw calls than OpenGL.
- Metal runs on roughly 1.4 billion devices currently.
- Metal can drive up to 56 TFLOPS of single precision.
Created by the Emmy-winning team that brought you Baymax Dreams, Sherman is a new real time Unity short that delivers the most advanced real time fur ever!
As many of you know, we put UNet into maintenance mode because we believe there is a better way forward with our new connected games stack
How do you find new revenue opportunities in a free-to-play game that’s been going strong for over five years? Moscow-based Pixonic, a Top 100-grossing game developer in the CIS, has driven War Robots’ monthly players’ spending to over $5M.
MidiAnimationTrack that imports .mid files into Unity Timeline. It allows creating musically synchronized animations (MIDI Animation Track).
Is a custom timeline/playables package that provides functionality to control object properties based on sequence data contained in a standard MIDI file
Sykoo made a new video about his 5 favorite assets from Unity Asset Store (5 AMAZING ASSETS for Unity 2019).
A common technique to extract text from images is known as OCR (Optical Character Recognition), and the best implementation that I know of is called Tesseract.
When I started to build the tool, I used the most famous Tesseract wrapper for .NET.
Although the wrapper worked very well, I was curious whether there was a way to get better performance. With a little searching I noticed that the .NET wrapper still used Tesseract 3, but there was a version 4 available with a lot of performance improvements:
If you are running Tesseract 4, you can use the “fast” integer models.
Tesseract 4 also uses up to four CPU threads while processing a page, so it will be faster than Tesseract 3
https://github.com/tesseract-ocr/tesseract/wiki/FAQ#can-i-increase-speed-of-ocr
So, I decided to try Tesseract 4 to see how it could impact the performance of my tool. As there was no .NET wrapper for it at the time, I removed the old wrapper and called Tesseract 4 directly as a process.
Using Tesseract 4 cut the time to read the images almost in half.
I ended up developing the class below to call the Tesseract 4 command-line (tesseract.exe) directly from C# code.
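The original class isn't reproduced here, but a simplified sketch of the idea looks like this (the temp-file handling and executable path are assumptions, not the post's exact code):

```csharp
using System.Diagnostics;
using System.IO;

public static class TesseractService
{
    // Runs tesseract.exe over the given image and returns the recognized text.
    public static string GetText(string imagePath)
    {
        var outputBase = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());

        var startInfo = new ProcessStartInfo
        {
            FileName = "tesseract.exe",
            // Tesseract writes its result to <outputBase>.txt.
            Arguments = $"\"{imagePath}\" \"{outputBase}\"",
            UseShellExecute = false,
            CreateNoWindow = true
        };

        using (var process = Process.Start(startInfo))
        {
            process.WaitForExit();
        }

        return File.ReadAllText(outputBase + ".txt");
    }
}
```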
If you try to read an image like this one:
You will get this result after calling the TesseractService.GetText method:
The (quick) [brown] {fox} jumps!
Over the $43,456.78 <lazy> #90 dog
& duck/goose, as 12.5% of E-mail
from aspammer@website.com is spam.
Der ,.schnelle" braune Fuchs springt
iiber den faulen Hund. Le renard brun
«rapide» saute par-dessus le chien
paresseux. La volpe marrone rapida
salta sopra il cane pigro. El zorro
marron rapido salta sobre el perro
perezoso. A raposa marrom rapida
salta sobre o céo preguicoso.
"CONTEXT/<component>/<menu name>" on a MenuItem attribute:
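A minimal example of the pattern (the "Reset Position" entry and its logic are arbitrary choices for illustration):

```csharp
using UnityEditor;
using UnityEngine;

public static class TransformContextMenu
{
    // Adds a "Reset Position" entry to the Transform component's context menu.
    // The "CONTEXT/<component>/<menu name>" pattern is what matters here.
    [MenuItem("CONTEXT/Transform/Reset Position")]
    static void ResetPosition(MenuCommand command)
    {
        var transform = (Transform)command.context;
        Undo.RecordObject(transform, "Reset Position");
        transform.localPosition = Vector3.zero;
    }
}
```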
Normally, out of the box, a Unity project will attempt to run your project as fast as possible. Frames will be rendered as quickly as they can while generally being limited by your display device’s refresh rate.
We’ve been hard at work changing most of the underlying technology powering Project Tiny in response to your feedback, and in order to bring it closer to the Unity ecosystem. This preview is fully integrated with Unity’s Data-Oriented Tech Stack (DOTS) and sets up a foundation for bringing advanced features for both tiny and big use cases.
Claudia L. posted about the Unity Asset Store partnership with Kochava to bring the Kochava SDK for free (Bringing the power of marketing data to indie devs).
Profile early and often as a DOTS implementation develops.
For 2019.2 we optimized and extended the native cache and exposed it as a public UnityEditor.TypeCache API. It can extract information very quickly, allowing iteration over the smaller number of types we are interested in (10–100). That significantly reduces the time required to fetch types by Editor tooling.
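As a quick illustration of the API (the menu path and the base type queried here are my own choices, not from the post):

```csharp
using UnityEditor;
using UnityEngine;

public static class TypeCacheExample
{
    // Lists every ScriptableObject subclass in the project using the cached
    // type information instead of scanning all loaded assemblies manually.
    [MenuItem("Tools/List ScriptableObject Types")]
    static void ListTypes()
    {
        foreach (var type in TypeCache.GetTypesDerivedFrom<ScriptableObject>())
        {
            Debug.Log(type.FullName);
        }
    }
}
```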
VFX Master Shader (My take on shaders: VFX Master Shader (Part I)).
While I experimented with different VFX concepts, I noticed that I was using a bunch of common shader techniques to create new shaders again and again, and I thought to myself: “Why not have one shader to rule them all?”
Sykoo made a new video tutorial about the Terrain Tools Package preview in Unity 2019.1 (NEW TERRAIN TOOLS in Unity! (Tutorial)).
Unity published a video tutorial to show how to create a toon outline effect using the new Scriptable Render Pass feature (How to Make a Toon Outline Effect in Unity 2019 LWRP! (Tutorial)).
Unity has been working closely with Apple throughout the development of ARKit 3, and we are excited to bring these new features to Unity developers.
Kristin Stock made a video showing how to create a simple build system with a circular UI (Building System and Circular UI in Unity).
Marker (Timeline marker and everything leading up to it).
New in Unity 2019.1, you can now create a duration-less object on the timeline, the marker.
Buried Memories Volume 2: Serekh
(Introducing Serekh: New asset pack, Buried Memories Volume 2).
The Unity Icon Collective strives to democratize high-end asset production. With Buried Memories, starting with Volume 1: Yggdrasil, we hope to inspire creators to extend their own universe but especially to guide aspiring artists and others to observe, analyze and learn from the techniques of industry veterans.
Introduction
There are many classic easing functions, such as InQuad, OutElastic, InOutSin, and InOutBounce. All of these are functions that take a time input parameter and return a value between 0 and 1.
For example, the easing function OutCirc is coded like:

public float Calculate(float time)
{
    return Mathf.Sqrt((2 - time) * time);
}
In the Easing2Curve editor window you can see 31 easing functions available:
In addition, you can add new easing functions just by implementing the IEasing interface.
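Assuming the IEasing interface exposes the same Calculate method shown above, a new easing function could look like this (the OutQuad formula is standard; the class name is my own):

```csharp
// Hypothetical example: an OutQuad easing implementing the assumed IEasing interface.
public class OutQuadEasing : IEasing
{
    // OutQuad: starts fast and decelerates towards 1.
    public float Calculate(float time)
    {
        return 1f - (1f - time) * (1f - time);
    }
}
```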
To set up Easing2Curve, just download this gist and unzip it into an Editor folder inside your Unity project.
You can open the window through the context menu on any AnimationCurve property in the Inspector.
In the video below you can see how to use Easing2Curve:
Several famous books and game designers cite how the prototyping phase of a new game project is an important time to lay out the foundations that will guide the whole game structure during its development, as well as to experiment with concepts and ideas in a quick and unassuming way.
The prototyping phase, although important, often takes up only a small percentage of a game project's development time, or in some cases is not part of the project itself but rather a point prior to it, where we are seeking the idea or concept that will give birth to the new project.
I always find it interesting to compare when an idea is born, or has its first steps, with its final result, because when we are at the beginning of any project or journey, the path may seem hazy and maybe it will become clear only when we manage to transpose the fog which separates the persevering from those who prefer to fear it.
For this reason I present screenshots comparing Ships N' Battles in its prototype phase, which, incidentally, at that stage did not have a name, only its project name “iBattleships” ;), with the final result of its HD version for iOS devices with armv7 and retina display, that is, after leaving the fog of this project behind.
So, below we have the screenshots of the main menu, ship deployment, aiming, and viewing the player's ships, for the first functional prototype running on Windows and the HD edition running on an iPad 2:
There are only two mistakes one can make along the road to truth; not going all the way, and not starting. (Buddha)
This is an old post that I made years ago on Skahal's blog; as the subject is still relevant, I'm reposting it here.
For this first post, the chosen one is AeroVision - Aircraft Trajectories Optimization and Visualization.
The authors' main motivation for AeroVision was to build a program that represents an innovative and efficient way to minimize and visualize aircraft noise along simulated and real flight routes. There are no existing programs that offer both optimization and visualization of aircraft noise.
To optimize aircraft trajectories using a genetic algorithm, AeroVision uses the GeneticSharp library, which is a fast, extensible, multi-platform and multithreading C# Genetic Algorithm library that simplifies the development of applications using Genetic Algorithms.
The library supports a number of selection methods, of which the most basic option (elite selection) is recommended. The recombination method used by AeroVision is crossover, which is a commonly used setting. The stop criterion can be a fixed number of generations, which can be specified by the user.
Additionally, the user is free to define the number of chromosomes that are part of one population. The default population size is 70 chromosomes, each representing a possible trajectory in our case.
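The setup described above maps almost one-to-one onto GeneticSharp's API. Below is a sketch of that configuration; TrajectoryChromosome and TrajectoryFitness are hypothetical stand-ins for AeroVision's own classes, and the mutation and generation count are illustrative:

```csharp
using GeneticSharp.Domain;
using GeneticSharp.Domain.Crossovers;
using GeneticSharp.Domain.Mutations;
using GeneticSharp.Domain.Populations;
using GeneticSharp.Domain.Selections;
using GeneticSharp.Domain.Terminations;

// 70 chromosomes per population, each one representing a candidate trajectory.
var population = new Population(70, 70, new TrajectoryChromosome());

var ga = new GeneticAlgorithm(
    population,
    new TrajectoryFitness(),      // evaluates aircraft noise along the trajectory
    new EliteSelection(),          // the "most basic option" mentioned above
    new UniformCrossover(),
    new UniformMutation(true))
{
    // Stop after a fixed, user-defined number of generations.
    Termination = new GenerationNumberTermination(100)
};

ga.Start();
```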
For trajectory optimization we were able to achieve a performance improvement of 84% by using a multi-core and multi-threaded genetic algorithm based on crossover operations. This led to a major improvement on the workflow management and automation
Elvan Kula and Hans Schouten wrote the paper as a requirement for their degree of Bachelor of Science in Computer Science. Dr. ir. N. Dintzner (TU Delft, supervisor), Dr. ir. S. Hartjes (client), and Dr. ir. M. Larson (Bachelor Project Coordinator) formed the thesis committee.
You can access the full paper directly in the TU Delft library repository.
]]>"How long does it take to create a flashlight effect using the Sprite Mask?" The answer is: less than 2 minutes.
You can check this by watching the video tutorial that I made showing how to add a flashlight effect to the Unity 2D Roguelike sample project:
Nick Davis wrote a post about how Unity recently teamed up with Autodesk and Texel Logic to create a mixed-reality experience to illustrate and understand the complex nature of airflow passing over and around an IndyCar (Simulating high-speed IndyCars in AR).
Sykoo, online evangelist for Unity, published a video showing his techniques during the level design process (How I made a Village in Unity in 1 Hour!).
Joyce[MinionsArt] released another shader tutorial on Twitter (Using particle lifetime to create a fire system).
Motion Matching for Unity on the Asset Store (Motion Matching for Unity (MxM) - Trailer).
can produce highly fluid and responsive animations without the need for state machines or complex logic.
This package contains over 15 new sculpting tools, as well as a utility toolbox to streamline terrain workflow.
Now the HD Render Pipeline and Post Processing are quite solid, and we were able to put much more effort into building on top of that foundation. Among other things, we put some effort into human faces and vfx-heavy characters.
The evangelist Keijiro Takahashi released a GitHub repository with an example that shows how to connect RealSense depth camera to Unity VFX Graph (Rsvfx).
We expect it to be fully supported by Unity 2019.3.
Anis Benyoub wrote about Leveraging Ray Tracing Hardware Acceleration In Unity.
Elena Nizhnik made a post listing the most outstanding #madewithunity projects from last days (Community Component – Soft body physics, shaders that deform space, Norman’s Island and our Steam wishlist).
There are many ways to structure folders in a Unity project, but the most common ones are cited in the Unity Learn tutorial Large Project Organisation:
To keep the project easy to navigate, avoid placing files in the root Assets folder. Use subfolders. How you organize those subfolders is generally decided by your projects but the two main ways to do it are:
A folder for each type of asset and subfolders in them per objects, zones (For example Assets/Materials, Assets/Prefabs, with subfolders Assets/Material/Level1 or Assets/Prefabs/Enemies)
A folder per objects or zones (Such as Assets/Level1/Enemies/Archer, Assets/Shared/UI, Assets/Forest/Trees) with all assets related to those in the folders (Assets/Forest/Trees/BigTree.fbx, Assets/Forest/Trees/Tree.mat, Assets/Forest/Trees/Tree_Bark.jpg).
With big teams, or even small ones, it is easy for some assets to end up in the wrong folders. To help identify and keep those project folders organized, I coded a couple of editor scripts that I call Folder organizer.
Another advantage of keeping assets organized in their proper folders is that you can apply default presets to assets by folder.
To set up the Folder organizer, just download this gist and unzip it into an Editor folder inside your Unity project.
You can open the window through the menu Window / Folder organizer.
Just configure the folders to ignore, whether the validation should run every time an asset is imported, and define a regular expression to find the assets and their expected folder.
In the video below you can see in more detail how to use the Folder organizer to validate whether your assets are in the right folders:
Creating animation clips from sprites is quite a simple operation to perform in Unity:
But in many cases, you need to repeat this operation over and over again to create other animations for the same character, like clips for idle, walk up, walk down, and walk horizontal animations.
When you need to do this for only one character, there is no problem, but imagine you need to create the same 4 animation clips for dozens (maybe hundreds) of characters. Besides being a boring operation, you will probably make some mistake and create a wrong clip. In cases like that, it could be a good option to create your own EditorWindow to automate those operations.
To start, let's see what the Unity Manual says about EditorWindow:
You can create any number of custom windows in your app. These behave just like the Inspector, Scene or any other built-in ones. This is a great way to add a user interface to a sub-system for your game.
Making a custom Editor Window involves the following simple steps:
- Create a script that derives from EditorWindow.
- Use code to trigger the window to display itself.
- Implement the GUI code for your tool.
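Those three steps fit in a few lines. This is a generic sketch, not yet the tutorial's window:

```csharp
using UnityEditor;
using UnityEngine;

// Step 1: a class that derives from EditorWindow.
public class MyToolWindow : EditorWindow
{
    // Step 2: a menu item that triggers the window to display itself.
    [MenuItem("Window/My Tool")]
    static void ShowWindow()
    {
        GetWindow<MyToolWindow>("My Tool");
    }

    // Step 3: the GUI code for the tool.
    void OnGUI()
    {
        GUILayout.Label("Hello from a custom editor window!");
    }
}
```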
During this tutorial we will create a class called DefaultAnimationsEditorWindow that derives from EditorWindow, add a menu item to the Unity Editor to open our EditorWindow, and implement the GUI for the window.
Furthermore, we will implement the code that allows us to create Animation Clips and Animation Override Controller from sprites.
The idea behind DefaultAnimationsEditorWindow is that when you have a bunch of characters using the same kind of animations, they use the same sprite sheet structure, so you can create the animations based on the sprite indexes in the sprite sheet texture.
similar sprites sheets from Phantasy Star IV (Alys, Chaz and Demi). Sprites ripped by Ultimecia from The Spriters Resource
Starting with the first sprite of walk down and ending with the last sprite of walk horizontal, we have 9 sprites, and in all sprite sheets the sprite indexes are the same:
All the source code and assets for this tutorial are available on this GitHub repository: https://github.com/giacomelli/coding-an-editorwindow-to-create-default-animations-from-sprites. To start the tutorial you need to fork, clone or download the repository.
git clone https://github.com/giacomelli/coding-an-editorwindow-to-create-default-animations-from-sprites.git
Open the folder default-animations-editor-window-starter in Unity.
This starter project has an initial setup and assets that allow us to focus on learning how to code the EditorWindow.
Open the scene _Tutorial/Scenes/TutorialScene.
Hit the Play button. You should see a screen like this:
We will organize our DefaultAnimationsEditorWindow in 3 main classes:
DefaultAnimationsEditorWindow: where the menu and GUI for our editor window are defined.
DefaultAnimationsSettings: this is our ScriptableObject that saves the settings defined in DefaultAnimationsEditorWindow.
DefaultAnimationsUtility: here we implement the code used by DefaultAnimationsEditorWindow to perform the operations. Putting the operation code in a separate class not bound to the GUI allows us to use these operations in any other script. This approach is similar to what Unity itself uses in some editor operations, such as AnimationUtility, PrefabUtility, and SpriteUtility.
Besides the above classes, there are some other extension method classes in the Extensions folder that are self-explanatory; I won't discuss them in detail, but you can check them and read their code documentation for a better understanding.
In the code below, we define the menu for the window through the ShowWindow method and the MenuItem attribute.
Default Animations menu item
In OnEnable we read the settings from our ScriptableObject DefaultAnimationsSettings.
The next 3 methods just draw the components to the editor window GUI.
editor window
There is little to say about this class because it is just an ordinary ScriptableObject with a couple of properties that will be serialized and used by the DefaultAnimationsUtility class, a singleton to make it easier to access the settings, two methods to load/create the asset, and a subclass for sprite mappings.
The most important code in this class is the SpriteIndexes property, which figures out which sprite indexes from the sprite sheet ClipToOverride is using.
SpriteIndexes will be used by DefaultAnimationsUtility.CreateAnimationClips to know what sprites should be used to create the new animations based on the DefaultAnimationsSettings.AnimationsMapping.
This class is where the heart of our editor window resides.
There are 2 important methods here: CreateAnimationClips and CreateAnimatorOverride.
It iterates through the AnimationsMappings defined in the editor window (saved in DefaultAnimationsSettings.AnimationsMapping); for each mapping it calls the CreateAnimationClip method.
CreateAnimationClip creates a new AnimationClip (or loads it if one with the same name already exists), copying the frame rate and wrap mode from the ClipToOverride defined in the mapping. Then, if the wrap mode is loop, it uses AnimationUtility to set the loop time in the clip settings (through the extension methods).
Now comes the trickiest part of this class: we need to create an EditorCurveBinding for the sprite, get the ObjectReferenceKeyframes from the ClipToOverride, and create new ObjectReferenceKeyframes for our new keyframes (AnimationClipExtensions).
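A condensed sketch of that step (the method shape and names here are illustrative, not the repository's exact code):

```csharp
using UnityEditor;
using UnityEngine;

public static class SpriteKeyframeSketch
{
    // Binds the SpriteRenderer's sprite property on the clip and writes one
    // keyframe per sprite, spaced by the clip's frame rate.
    public static void SetSprites(AnimationClip clip, Sprite[] sprites, float frameRate)
    {
        var binding = EditorCurveBinding.PPtrCurve("", typeof(SpriteRenderer), "m_Sprite");

        var keyframes = new ObjectReferenceKeyframe[sprites.Length];

        for (int i = 0; i < sprites.Length; i++)
        {
            keyframes[i] = new ObjectReferenceKeyframe
            {
                time = i / frameRate,
                value = sprites[i]
            };
        }

        AnimationUtility.SetObjectReferenceCurve(clip, binding, keyframes);
    }
}
```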
In this method we create an AnimatorOverrideController that overrides each clip from the DefaultAnimationsSettings.AnimatorController with the ClipToOverride of each mapping defined in DefaultAnimationsSettings.AnimationsMapping (AnimatorOverrideControllerExtensions).
Copy this whole Editor folder to your Assets/_Tutorial folder.
This folder is available in your local clone of the repository inside the folder default-animations-editor-window-complete/Assets/_Tutorial.
Delete the DefaultAnimationsSettings scriptable object instance.
Open the editor window through the menu Windows / Default Animations and configure it as shown in the video below:
Drag all the sprite sheets (textures) from the folder Assets/_Tutorial/Sprites to the Spritesheets field and click the Create animations button:
Try to change the editor window to allow more than one set of default animations. One way to do this is by creating another ScriptableObject to save the currently selected DefaultAnimationsSettings asset.
If you have any doubt about how to implement any part of this challenge, feel free to ask in this post's comments or send me a message.
The tutorial repository has two main folders:
In this tutorial, we learned how to code a custom editor window to create a set of default animation clips and an animator override controller.
Icons made by Freepik, Vignesh Oviyan and Eucalyp from www.flaticon.com are licensed under Creative Commons BY 3.0.
In the last few days I've created an Azure Pipeline that publishes WebJobs, but in our specific case our WebJobs were not being published to /site/wwwroot/App_Data/jobs/Continuous/<webjobs name>, because, for “reasons”, our root web app uses /site/www as its folder. So you would expect the new folder to deploy WebJobs to be /site/www/App_Data/jobs/Continuous/<webjobs name>, but NO, the new folder is /site/jobs/Continuous/<webjobs name>.
The big problem with the above behaviour is that WebDeploy via VS or the AzureRmWebAppDeployment task via Azure Pipelines will still try to deploy the WebJobs to the folder inside /site/www/.
My best suspects are these lines in AzureRmWebAppDeployment@3:
AzureRmWebAppDeployment only checks the physical path configured on Azure when we define a VirtualApplication parameter, but in our case it is not a virtual application, just a folder different from /site/wwwroot/.
In the Kudu wiki there is some explanation about the WebJobs folders.
I used the FtpUpload task to publish the WebJobs binaries to /site/jobs/Continuous/<webjobs name> and two instances of the AzureAppServiceManage task to stop and start the jobs:
You'll need to replace the values between <..> in the variables section with values specific to your project.
I omitted some tasks from the original azure-pipelines.yml for simplicity. The original has a lot of other tasks, such as unit tests and web API publishing.
Now that you've automated your WebJobs publishing, it is a good idea to improve your jobs to check whether Azure has requested a shutdown. You can do this through the CancellationToken parameter passed to the job.
A function can accept a CancellationToken parameter, which enables the operating system to notify your code when the function is about to be terminated. You can use this notification to make sure the function doesn’t terminate unexpectedly in a way that leaves data in an inconsistent state.
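A sketch of what that looks like in a continuous WebJob (the queue name and the work loop are placeholders, not your actual job):

```csharp
using System.Threading;
using Microsoft.Azure.WebJobs;

public static class Functions
{
    // Hypothetical queue-triggered job. The CancellationToken is signaled when
    // Azure requests a shutdown, letting the job stop cleanly instead of being
    // killed mid-work.
    public static void ProcessQueueMessage(
        [QueueTrigger("myqueue")] string message,
        CancellationToken cancellationToken)
    {
        while (!cancellationToken.IsCancellationRequested)
        {
            // ...do one small unit of work per iteration...
        }

        // Persist any partial state here before the process is terminated.
    }
}
```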
In the video above you can see the Sorting Layer Debugger being used in the Unity 2D Platformer sample project.
Just download the gist below to your Unity3D project and add it inside an Editor folder.
You can open the debugger window through the menu Window / Sorting Layer Debugger.
The debugger can be enabled in edit or play mode and it will list the Sorting Layers and the number of game objects using each layer. You can hide/show the game objects of each layer by clicking on its checkbox.
The name of the Sorting Layer being used is shown at the top of each game object in the scene view.
Scene view with Sorting Layer Debugger enabled
Kristin Stock posted a video about how she is procedurally generating cities using Subdivision in Unity.
Set up procedural motion on animated skeletons at runtime. You can use a set of predefined animation constraints to manually build a control rig hierarchy for a character or develop your own custom constraints in C#.
Alexander Ameye launched a site with many Unity tutorials, like Water 1: Tessellated Plane
and Toon Shading
.
Inception effect
.
Have you ever wondered how your brain would react if you bend the space around you in VR? Or change the field of view and do a vertigo? Well I did, so I started writing a series of shaders which deformed the space around me, using matrices and ended up with an Inception looking effect.
Have you ever needed to compare the difference in performance between two versions of your project? See the impact of an asset or code change, optimization work, settings change or Unity version upgrade?
Unity 2019.2 beta released an experimental 2D Renderer in LWRP with 2D lights, Lit and Unlit Sprite Masternode in Shader Graph, and Pixel Perfect Camera.
We’ve built Signals to establish a communication channel between Timeline and outside systems. But what does that mean? Why did we decide on this approach?
Imagine we imported an animated 3D model from the Asset Store. In our sample, we will use the incredible robot from the free package Sci Fi Warrior PBR HP by Dungeon Mason. In this package, there are 10 animation clips:
animation clips available on Sci Fi Warrior PBR HP
All those clips are full-body animations. So, when you play them:
- Idle_GunMiddle: the robot stands idle with the gun in the middle of his body
- WalkForward_Shoot: the robot walks forward and shoots
You can see all the animations available on the Sci Fi Warrior PBR HP package in this video https://www.youtube.com/watch?v=fNzBdYhm3Gk
These are great animations, but what if we want the robot to stand idle and, instead of keeping the gun in the middle of his body, aim the gun or shoot?
Or what if we want the robot to walk forward while holding the gun in the middle of his body?
Of course, we can ask the artist to create all those animation combinations, but there is a smarter approach for this case…
To start, let's see what the Unity Manual says about Avatar Masks and Animation Layers:
Masking allows you to discard some of the animation data within a clip, allowing the clip to animate only parts of the object or character rather than the entire thing. For example, if you had a character with a throwing animation. If you wanted to be able to use the throwing animation in conjunction with various other body movements such as running, crouching and jumping, you could create a mask for the throwing animation limiting it to just the right arm, upper body and head. This portion of the animation can then be played in a layer over the top of the base running or jumping animations.
Unity uses Animation Layers for managing complex state machines for different body parts. An example of this is if you have a lower-body layer for walking-jumping, and an upper-body layer for throwing objects / shooting.
By reading those two sections it is quite clear that we need to use avatar masks and animation layers together to combine animations of different body parts. So, in the next sections, I will demonstrate how to combine three animations (Idle_GunMiddle, WalkForward_Shoot, and Shoot_single) to make the robot shoot while standing idle and walk forward with the gun in the middle of his body. After that, I will propose a challenge for the other animations available.
All the source code and assets for this tutorial are available on this GitHub repository: https://github.com/giacomelli/unity-avatar-mask-and-animation-layers. To start this tutorial you need to fork, clone or download the repository.
git clone https://github.com/giacomelli/unity-avatar-mask-and-animation-layers.git
Open the folder avatar-mask-starter in Unity.
This starter project has an initial setup and assets that allow us to focus on learning about Avatar Masks and Animation Layers.
Open the scene _Tutorial/Scenes/TutorialScene.
If you see a popup called TMP Importer, hit the Import TMP Essentials button to import the TextMesh Pro assets.
Hit the Play button. You should see a screen like this:
Starter project running: just HUD
Create a new Animation Controller (menu Assets/Create/Animation Controller) and open it:
Animator window showing the animation controller created
All animation layers, nodes, and transitions for this tutorial will be created inside this animation controller.
In the hierarchy, select the SciFiWarriorHP and, in the Animator component, set the Controller property to our Animation Controller.
To get something running right now, we will make the robot walk forward when the WALK / FORWARD button is checked.
In the Animator window with our Animation Controller open, drag in the animation clip Idle_GunMiddle from the folder SciFiWarrior/Animations. Do the same with the clip WalkForward_Shoot.
Create a transition (right-click on the node and Make transition) from Any State to WalkForward_Shoot and create another transition from WalkForward_Shoot to Idle_GunMiddle.
Now we need to tell the animation controller when it should activate the two transitions; for this we will create a bool animation parameter called WalkForward.
WalkForward parameter created
We want to activate the transition from Any State to WalkForward_Shoot when WalkForward is true, so select this transition (click on the arrow connecting the state Any State to WalkForward_Shoot) and in the Conditions list in the inspector add WalkForward equals true:
WalkForward condition defined
Click reset to auto-fit the exit and transition times:
reseting the transition settings
Repeat the same steps as above for the transition from WalkForward_Shoot to Idle_GunMiddle, but use false as the value for WalkForward.
animation base layer
Hit the Play button to test the animation states. When you click on WALK / FORWARD the robot starts walking; when you click it again, the robot stops walking.
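Behind the button, the parameter is driven via Animator.SetBool. A hypothetical wiring (the component and method names are my own, not the starter project's code):

```csharp
using UnityEngine;

public class RobotAnimationHud : MonoBehaviour
{
    [SerializeField] Animator animator;

    // Hook this to the WALK / FORWARD toggle's OnValueChanged event.
    public void OnWalkForwardChanged(bool isOn)
    {
        // Matches the bool parameter created in the animation controller.
        animator.SetBool("WalkForward", isOn);
    }
}
```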
The animation for walking forward works pretty well: the robot walks forward while it's aiming the gun. This is the whole movement animated inside the WalkForward_Shoot clip.
Now we want the robot to walk forward without aiming the gun; instead, we want it to keep the gun in the middle as it does when Idle_GunMiddle is active. To get this done, first, we need to create an avatar mask for the upper body.
Create a new Avatar Mask (Assets/Create/Avatar Mask) called UpperBody. In the inspector, uncheck all lower body parts, as in the image below:
only upper body
With the UpperBody Avatar Mask created, we need to create a new animation layer that will use the mask. Go to the Animator window and create a new layer using the + button:
the new layer created with the mask
Change the Weight property to 1 and set the Mask to the UpperBody Avatar Mask. Keep the Blending as Override.
Drag the animation clip Idle_GunMiddle from the folder SciFiWarrior/Animations to the UpperBody layer.
upper body animation layer
Hit the Play button to test the animation states. When you click on WALK / FORWARD the robot starts walking, but now it's not aiming the gun; instead, it keeps the gun in the middle.
This is done by the UpperBody Animation Layer that we added to our animation controller: as its mask was defined with our UpperBody Avatar Mask, which only considers the upper body parts of the humanoid, Unity overrides the base layer animation with the upper body part of Idle_GunMiddle.
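The override doesn't have to be all-or-nothing: the layer weight can also be changed from code at runtime to blend the upper-body animation in and out. A small sketch (assuming the UpperBody layer sits at index 1, right after the base layer):

```csharp
using UnityEngine;

public class UpperBodyBlend : MonoBehaviour
{
    [SerializeField] Animator animator;
    [Range(0f, 1f)] [SerializeField] float upperBodyWeight = 1f;

    void Update()
    {
        // 0 = base layer only; 1 = UpperBody fully overrides the masked parts.
        animator.SetLayerWeight(1, upperBodyWeight);
    }
}
```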
We will make the robot shoot when the SHOOT / SINGLE button is checked; this will help us better understand how the Avatar Mask and Animation Layer work together to override the animations from the base layer with the animations from the UpperBody layer.
In the Animator window with our Animation Controller open, select the UpperBody layer and drag in the animation clip Shoot_single from the folder SciFiWarrior/Animations.
Create a transition (right-click on the node and Make transition) from Any State to Shoot_single and create another transition from Shoot_single to Idle_GunMiddle.
Now we need to tell the animation controller when it should activate the two transitions; for this we will create a bool animation parameter called ShootSingle.
ShootSingle parameter created
We want to activate the transition from Any State to Shoot_single when ShootSingle is true, so select this transition (click on the arrow connecting the state Any State to Shoot_single) and in the Conditions list in the inspector add ShootSingle equals true.
Click reset to auto-fit the exit and transition times.
Repeat the same steps as above for the transition from Shoot_single to Idle_GunMiddle, but use false as the value for ShootSingle.
Hit the Play button to test the animation states. When you click on SHOOT / SINGLE, the robot shoots.
Now you can combine the two buttons WALK / FORWARD
and SHOOT / SINGLE
and see how the animations combine.
Doing something yourself is a better way to learn than just reading and following someone else's instructions, so I will challenge you to implement the other actions:
the remaining actions for the challenge
To implement these actions you need to do almost the same things we did in this tutorial so far, just using other animation clips, transitions, and parameters.
The challenging items are the leg actions, because you will need to create new Avatar Masks, using the Transform
option of the avatar configuration, to get the right animation. New Animation Layers will be needed as well.
If you have any doubts about how to implement any part of this challenge, feel free to ask in this post's comments or send me a message.
The whole source code and assets for this tutorial are available on: https://github.com/giacomelli/unity-avatar-mask-and-animation-layers
This repository has two main folders:
The video below shows the complete solution running:
In this tutorial, we learned how to use Avatar Masks and Animation Layers to animate a robot with different masks and layers. These techniques allow us to reuse already existing animations and combine them.
Icons made by Freepik, Vignesh Oviyan and Eucalyp from www.flaticon.com are licensed under Creative Commons BY 3.0
Before you can use the dotnet new
command to create the GeneticSharp projects from templates, you need to install the templates on your machine:
dotnet new -i GeneticSharp.Templates
After this, if you run the command:
dotnet new GeneticSharp --list
The GeneticSharp templates will be listed:
Create a new console application template with GeneticSharp where you just need to implement the chromosome and fitness function.
dotnet new GeneticSharpConsoleApp -n MyNamespace -o MyOutputFolder
Create a new console application template with GeneticSharp ready to run a Travelling Salesman Problem (TSP).
dotnet new GeneticSharpTspConsoleApp -n MyNamespace -o MyOutputFolder
Create an Unity3D template with GeneticSharp ready to run a Travelling Salesman Problem (TSP).
dotnet new GeneticSharpTspUnity3d -n MyNamespace -o MyOutputFolder
Let’s evolve!
]]>The additions in this version are two new crossover implementations and a new ITaskExecutor option that uses the Task Parallel Library (TPL).
The alternating position crossover operator (Larrañaga et al. 1996a) simply creates an offspring by selecting alternately the next element of the first parent and the next element of the second parent, omitting the elements already present in the offspring.
It can be seen as a p-sexual crossover operator, where p (the number of parents) is a natural number greater than or equal to 2.
It starts by defining a threshold, which is a natural number smaller than or equal to p.
Next, for every i ∈ {1, 2, …, N}, the set of i-th elements of all the parents is considered. If an element occurs in this set at least the threshold number of times, it is copied into the offspring.
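To make the alternating-position operator concrete, here is a minimal Python sketch of the idea (the library's actual implementation is C#; the function name here is illustrative):

```python
def ap_crossover(parent1, parent2):
    """Alternating-position crossover: take the next element of each parent
    in turn, skipping elements already present in the offspring."""
    offspring = []
    for a, b in zip(parent1, parent2):
        for gene in (a, b):
            if gene not in offspring:
                offspring.append(gene)
    return offspring

# For parents [1, 2, 3, 4] and [3, 1, 4, 2] the offspring is [1, 3, 2, 4]:
# 1 (p1), 3 (p2), 2 (p1), skip 1, skip 3, 4 (p2), skip 4, skip 2.
```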
Three new classes were implemented to run some key points of genetic algorithm using TPL.
Those new classes can be used alone, but normally you will use them all together. You can see a sample usage in the unit test Start_TplManyGenerations_Optimization.
An ITaskExecutor’s implementation that executes the tasks in a parallel fashion using Task Parallel Library (TPL).
Represents a population of candidate solutions (chromosomes) using TPL to create them.
A new interface called IOperatorsStrategy was added to GeneticAlgorithm as an option. Two operators strategies were created: the default one, called DefaultOperatorsStrategy, and a new one called TplOperatorsStrategy.
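The core idea behind these TPL-based classes, evaluating the population's fitness in parallel, can be sketched in Python (GeneticSharp itself does this in C# with the Task Parallel Library; names here are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_population(fitness, chromosomes):
    """Evaluate every chromosome's fitness on a thread pool, preserving order."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(fitness, chromosomes))
```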
I would like to thank EMostafaAli and Alexey I. for opening some issues and making small pull requests, and Dan for contributing the TPL implementations.
Let’s evolve!
]]>In 2005, together with 3 friends, I founded the jogosdaqui website, a site specialized in covering games developed by Brazilian companies.
The site produced many articles, cataloging several games, starting with Aeroporto 83, considered the first Brazilian game.
This article production remained quite active between 2005 and 2007, but in 2008 we "closed the doors", because we could no longer keep the site as up to date as it deserved.
In 2010, in the middle of the production of the game Ships N' Battles by my indie gamedev studio Skahal Studios, I created a Twitter account for jogosdaqui, to at least help a little with promoting Brazilian games.
The account gained some notoriety when I asked the presidential candidates about their positions on the Brazilian game industry and José Serra replied:
Some time later, I was invited by Théo Azevedo from UOL Jogos to create a jogosdaqui blog on UOL.
That blog ran from the end of 2010 to the end of 2014.
During that time I also produced some interviews for EGW magazine, with the same goal as jogosdaqui: covering Brazilian gamedevs:
At the end of 2014, I started migrating jogosdaqui to its own WordPress site, and throughout 2015 several articles were produced, but due to personal demands the site only stayed online until 2016.
And that history brings us to this moment, because I believe the material jogosdaqui produced about Brazilian games, some of which can only be found on jogosdaqui, should not be forgotten or lost.
So, at the beginning of this month, I started converting all the articles, coming from 3 different platforms (the PHP site, the UOL blog, and WordPress), to a single open source platform based on GitHub Pages (Jekyll). This will ensure the material is not lost, that it can still be improved, and that new posts can be published by anyone interested in promoting the Brazilian game development industry.
Visit https://jogosdaqui.github.io and learn more about Brazilian electronic games.
The additions in this version are a whole new sample and extensions showing how to use GeneticSharp to solve Sudoku puzzles.
The GeneticSharp.Extensions project receives these new features:
Compound chromosome to artificially increase genetics diversity by evolving a list of chromosomes instead of just one. Sub-genes are inlined into a single compound list of genes.
Fitness class that can evaluate a compound chromosome by summing over the evaluation of its sub-chromosomes.
Represents each type of chromosome for solving a Sudoku; implementations are simply required to output a list of candidate Sudokus.
A class that represents a Sudoku, fully or partially completed. Holds a list of 81 int for cells, with 0 for empty cells. Can parse strings and files from most common formats and displays the sudoku in an easy to read format.
This simple chromosome simply represents each cell by a gene with a value between 1 and 9, accounting for the target mask if given.
Evaluates a sudoku chromosome for completion by counting duplicates in rows, columns, boxes, and differences from the target mask.
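The duplicate-counting evaluation described above can be sketched in Python (the extension itself is C#; this is only the idea, ignoring the target-mask term):

```python
def sudoku_errors(cells):
    """Count duplicate digits in every row, column and 3x3 box of a 9x9
    grid given as a flat list of 81 ints (0 = empty cell)."""
    def duplicates(group):
        filled = [v for v in group if v != 0]
        return len(filled) - len(set(filled))

    rows = [cells[r * 9:(r + 1) * 9] for r in range(9)]
    cols = [cells[c::9] for c in range(9)]
    boxes = [[cells[(br * 3 + r) * 9 + bc * 3 + c]
              for r in range(3) for c in range(3)]
             for br in range(3) for bc in range(3)]
    return sum(duplicates(g) for g in rows + cols + boxes)
```

A fully solved, valid Sudoku scores 0, so the fitness simply rewards lower error counts.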
This more elaborated chromosome manipulates rows instead of cells, and each of its 9 gene holds an integer for the index of the row’s permutation amongst all that respect the target mask. Permutations are computed once when a new Sudoku is encountered, and stored in a static dictionary for further reference.
This chromosome aims at increasing genetic diversity of SudokuPermutationsChromosome, which exhibits only 9 permutation genes. Here, instead, an arbitrary number of Sudokus are generated where for each row, a random gene is picked amongst an arbitrary number of corresponding permutation genes.
GTK# sample
I would like to thank Jean-Sylvain Boige (@jsboige) for contributing these great new samples and extensions and for using GeneticSharp in his Artificial Intelligence course at French engineering schools (course).
Take a look at the pull request for more details about these new features: New Sudoku extension and GTK# sample #43.
Let’s evolve!
]]>As it is now whenever you create an instance of FloatingPointChromosome, it will randomly create gene values. I have a case where I need to stop optimization at some time, save results to DB and resume it later. For this, I need to be able to give gene values to FloatingPointChromosome.
I just discovered that when running the optimizer within a Task/Tread/TPL Dataflow block with TaskExecutor set to ParallelTaskExecutor when instantiating GeneticAlgorithm, it blocks all other outside operations during the lifetime of the optimizer run. This does not happen when not setting the TaskExecutor option.
I would like to thank @MattWolf74 and @mersadk for contributing the open issues and pull requests.
If you want to use this new version on your project, just get the 2.1.0 version from NuGet:
update-package GeneticSharp
Let’s evolve!
]]>In this app you can see GeneticSharp running on Unity3d in three different samples:
Based on famous BoxCar2D, this sample uses a genetic algorithm to create car designs to overcome road challenges, like gaps, hills, and obstacles.
The classic TSP sample, but in this, we can change the cities positions while the genetic algorithm is running and see how it finds the best route.
This sample inspired this post TSP with GeneticSharp and Unity3D.
This sample tries to build a higher wall using random initial bricks positions.
You can get the full source code here: https://github.com/giacomelli/GeneticSharp/tree/master/src/GeneticSharp.Runner.UnityApp
Let’s evolve!
]]>I was always amazed by the 2D cars designed by BoxCar2D, watching the genetic algorithm make new and (probably) better cars each generation, and I always wanted to make a sample inspired by it using GeneticSharp.
The sample that I will talk about is available on the GeneticSharp repository at GeneticSharp.Runner.UnityApp. You can fork GeneticSharp and open it on Unity3D editor, then run the MenuScene.
If you need some introduction to genetic algorithms or GeneticSharp:
- Introduction to genetic algorithms: Function optimization with GeneticSharp.
- Using GeneticSharp on Unity3D: TSP with GeneticSharp and Unity3D.
In GeneticSharp Car2D, a car is composed of:
The vectors and wheels have mass, so bigger ones will make a slower car.
To represent the phenotype described above the car chromosome will be:
This structure takes 27 bits; we then repeat it 8 times, once per car vector, which gives us a chromosome with 216 bits.
The bit string chromosome will look like this:
001101110101110100001000010010100100111000100000010101010100000111100100001110010010010101010111100000000011011100000001110000001110010000110101011111100000100011011101101011101000001110000011011000110111000000000011
The roads and cars can be configured using the scriptable object CarSampleConfig. GeneticSharp Car2D allows creating any number of different road and car configs, but for this post I will show 3 road configurations with the same car configuration mentioned above.
If you want to add new roads, just create a new CarSampleConfig in “Assets / Create / GeneticSharp / Car / CarSampleConfig”. The new road will automatically appear on Car2D menu when you run the sample.
A road is defined by its points quantity and distance, height, rotation, gaps, and obstacles. The gravity and everything about physics is automatically simulated by Unity3d itself.
The fitness of each car (chromosome) on a road is the maximum distance reached by it plus its average velocity at that moment.
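As a rough sketch of that evaluation (the sample's real code is C# and tracks distance and velocity through Unity physics; this only illustrates the formula):

```python
def car_fitness(max_distance, elapsed_time):
    """Fitness sketch: max distance reached plus average velocity so far."""
    average_velocity = max_distance / elapsed_time if elapsed_time > 0 else 0.0
    return max_distance + average_velocity
```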
The genetic algorithm that evaluates the simulation is configured with these operators:
Besides that, the chromosomes are evaluated in parallel using GeneticSharp's ParallelTaskExecutor, and you can see them all in a 4x4 grid:
If you are running the sample inside Unity3D editor you can change the size of the simulation grid in the SampleController game object from CarScene.
Now that we have our Car2D and roads defined, we can let the genetic algorithm run and see how it designs the cars for each road.
While the roads are different, the car setup is the same. This way we can compare how the genetic algorithm reaches different car designs for different challenges (roads).
The first is a road with gaps that get progressively larger. The next video summarizes how GeneticSharp designs the car for the gap road.
All videos were recorded running GeneticSharp Car2D on my Android phone. I used DU Recorder to record them. Amazing app, BTW.
After about 100 generations, GeneticSharp generated a car with a front wheel suspended in the air, which allows it to reach the other side of the gap without falling in.
This road has some hills that increase as the car travels.
For the hill road, the genetic algorithm creates a car with enough length to touch both sides of the downhills at the same time, which lets the car keep the velocity needed to overcome the uphill part. The middle wheel is what lets the car pass the hill section of the road.
Some fixed obstacles are placed on the road and force the cars to pass over them.
This is probably the most curious design, because the genetic algorithm found out that to overcome the obstacle, the car needs to reach it at high velocity and needs back support to avoid rolling over.
I think two things are quite clear after building this sample:
All the challenges in the roads are incremental: the gaps start small and get bigger along the road, and the same was done for the hills and the obstacles.
Why do the challenges need to be incremental?
During the first tests with the gap road, I realized that if I just put an 8-meter gap in front of the first-generation cars, the genetic algorithm could not select better designs, because most of them simply fell into the first gap. The challenge was too much for the initial generations; but when I increased the size of the gaps gradually until they reached 8 meters, the genetic algorithm could select better and better designs each generation.
It’s how we learn
The same happens with how we learn many things, like math: first basic operations, then simple equations, and in a few years we are calculating integrals (or trying, at least).
It's easy to spot this behavior in evolution itself, on which genetic algorithms are based. In the book "Guns, Germs and Steel", Jared Diamond says:
“The near-simultaneous disappearance of so many large species raises an obvious question: what caused it? An obvious possible answer is that they were killed off or else eliminated indirectly by the first arriving humans. Recall that Australian / New Guinean animals had evolved for millions of years in the absence of human hunters. We know that Galapagos and Antarctic birds and mammals, which similarly evolved in the absence of humans and did not see humans until modern times, are still incurably tame today.”
The Australian / New Guinean animals faced a challenge that was too much for them at that time, unlike the animals from other continents that evolved together with humans (or proto-humans) and learned to survive this predator.
Different challenges lead to different car designs. It's quite clear that the best car designs for each of the 3 roads differ from one another, and each only works on its specific road.
Now it's up to you: fork GeneticSharp and run GeneticSharp Car2D in your Unity3D editor. Let me know what roads and results you create.
Let’s evolve!
]]>In this Unity3d project there are 3 samples:
Based on famous BoxCar2D, this sample uses a genetic algorithm to create car designs to overcome road challenges, like gaps, hills, and obstacles.
A post explaining this sample in detail will be published tomorrow.
The classic TSP sample, but in this, we can change the cities positions while the genetic algorithm is running and see how it finds the best route.
This sample inspired this post TSP with GeneticSharp and Unity3D.
This sample tries to build a higher wall using random initial bricks positions.
The folder _runner/Commons contains the BitStringChromosome class and some phenotype implementations. At the moment they are experimental, but I will wait for community feedback to see if they can be promoted to the GeneticSharp library code.
Let’s evolve!
]]>In this post I will show how to use GeneticSharp and Unity3D to solve the TSP (Travelling salesman problem).
According to Wikipedia, "The travelling salesman problem (TSP) asks the following question: 'Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?'"
TSP is a classic sample used to test optimization techniques, and it is often used to demonstrate how to implement a genetic algorithm. For these reasons, I will use it to show you how to implement a basic genetic algorithm in Unity3D using GeneticSharp.
To better follow this tutorial, you need some experience/knowledge of:
If you need an introduction to genetic algorithms, take a look at this tutorial Function optimization with GeneticSharp.
Using Unity 2018.1+, create a new project called TspSample.
Go to “Player settings” / “Other settings” / “Configuration”, select “.NET 4.x Equivalent” on “Scripting Runtime Version”. Unity will ask to restart, you can confirm.
After the restart, go back to "Player settings" and select ".NET Standard 2.0" on "Api Compatibility Level".
Install GeneticSharp using the .unitypackage available on GeneticSharp release page.
The chromosome represents a solution of the problem we are trying to solve. In our case the TSP chromosome should represent “the shortest possible route that visits each city and returns to the origin city”.
To represent the cities route, each gene of our chromosome will represent the index of a city in the route.
Create a C# script called “TspChromosome.cs”:
The next step is to define our genetic algorithm's fitness function, but first we need to create a simple class to represent a city in 2D space.
Create a C# script called “City.cs”:
Now we need to evaluate the TspChromosome.
Our fitness function will evaluate the TspChromosome fitness based on the total distance to reach all cities in the route represented by the chromosome. The shorter the distance, the better the chromosome.
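Conceptually, the evaluation is just a closed-loop distance sum. A Python sketch of the idea (the tutorial's actual script is C#; names here are illustrative):

```python
import math

def route_distance(cities, order):
    """Total length of the closed route visiting cities (a list of (x, y)
    tuples) in the given order, returning to the starting city."""
    total = 0.0
    for i in range(len(order)):
        x1, y1 = cities[order[i]]
        x2, y2 = cities[order[(i + 1) % len(order)]]  # wrap back to the start
        total += math.hypot(x2 - x1, y2 - y1)
    return total
```

A chromosome whose route is shorter would then map to a higher fitness, for example by negating or inverting this distance.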
Create a C# script called “TspFitness.cs”:
In this step we need to configure our genetic algorithm using the TspChromosome, TspFitness and some classic GA operators already built in GeneticSharp.
Create a C# script called “GAController.cs”:
Create a GameObject called “GAController” in the scene and add the GAController.cs to it.
Save the scene.
Run the scene in the editor and take a look at the console window: you will see the distance to reach all cities getting smaller as the generations run.
Now our GA is running inside Unity3D, but it needs to display the cities route better. We need to create a visual representation of the cities.
We will create a prefab based on a sprite of a pin. You can use an icon like this one from www.flaticon.com.
Download it into your Unity3D project.
Maybe you will need to change the ‘Pixels Per Unit’ to 1000 to get a good pin size on screen.
Drag it to the hierarchy panel, rename the new GameObject to CityPrefab and drag it back to your Assets folder on Project panel. Now our CityPrefab is created.
Delete the CityPrefab game object from the current scene.
Add the following field to the GAController.cs
Then, create the method DrawCities:
And then call it from Start method:
Now, select the GAController game object on hierarchy and set the CityPrefab property.
Try to run the scene, you should see something like this:
In the previous step we drew the cities, so we have the visual representation of the problem: the cities.
Now we need to draw the solution: the route represented by the best chromosome of each generation.
One of the simplest ways to draw some lines in Unity3D is using the LineRenderer component.
Add the following code to the GAController.cs:
Create the method DrawRoute:
Then call it from Update method:
Before running the scene, we need to add a LineRenderer component to our GAController game object.
Change the width property of the LineRenderer from 1 to 0.1.
Run the scene again; now you should see the route being optimized as the generations run:
Our sample could be considered done, but it would be nice if you could change the cities' positions while the genetic algorithm is running and see how it handles these changes.
Create a C# script called "CityController.cs". I won't get into details about how this script works, but it allows the user to drag the cities' pins using the mouse, or a finger touch if you build it for mobile.
Add the CityController.cs to the CityPrefab.
Change the GAController.cs script adding the line below to the end of the for loop of DrawCities method:
Finally, our sample is really done, and you should be able to change the cities' positions by dragging the pins around while the genetic algorithm tries to figure out the best route in real time.
With only 5 C# scripts and 1 prefab we built a pretty nice sample of genetic algorithms in Unity3D with GeneticSharp. Now you can improve it with your own ideas or use some of mine ;):
The full source code used in this post can be downloaded or forked from this Gist: https://gist.github.com/giacomelli/94721a46d33c6bcb1f3ae11117b7f888
Let’s evolve!
]]>In March, GeneticSharp started to support .NET Standard 2.0 in the 2.0.0-rc version. Two months have passed, the community has had time to test that release candidate, and Unity3D now has an official .NET Standard 2.0 profile.
So, it’s time to release the GeneticSharp v2.0.0.
Only GeneticSharp:
install-package GeneticSharp
GeneticSharp and extensions (TSP, AutoConfig, Bitmap equality, Equality equation, Equation solver, Function builder, etc):
install-package GeneticSharp.Extensions
If you’re still in a .NET Framework version lower than 4.6.2 project use the 1.2.0 version.
install-package GeneticSharp -Version 1.2.0
I've already talked about these breaking changes in the post about the release candidate, but I guess it's good to point them out again:
Let’s evolve!
]]>If you don't know what DocsByReflection is or what you can do with it, take a look at my previous post about it, "Getting your code documentation at runtime".
In this version DocsByReflection starts to support .NET Standard 2.0.
Now, if you need to use the library on .NET Standard 2.0 or .NET Framework 4.6 projects:
install-package DocsByReflection
If you are in a .NET Framework project lower than .NET Framework 4.6 you can use the previous version:
install-package DocsByReflection -Version 1.0.12.20
I would like to thank Erik O'Leary for performing the whole migration of DocsByReflection to .NET Standard 2.0.
]]>In this version GeneticSharp starts to support .NET Standard 2.0 and .NET Framework 4.6.2.
Porting to .NET Core
Some time ago I started porting GeneticSharp to .NET Core, and today, after more than 40 hours of work, I finally finished it.
Although GeneticSharp was born as a multi-platform library that ran on any OS supported by .NET Framework and Mono, converting it to .NET Core was a desired thing, because .NET Core is where the .NET ecosystem is heading.
This 2.0.0-rc1 has already been published to nuget.org, but it's marked as a pre-release package. I will keep it as an RC until I'm sure there are no issues with the port and until Unity3D removes the "experimental" status from its support for .NET Standard 2.0.
If you are on a .NET Core or .NET Framework 4.6.2+ project, please try the 2.0.0-rc1 version.
Only GeneticSharp:
install-package GeneticSharp -Version 2.0.0-rc1
GeneticSharp and extensions (TSP, AutoConfig, Bitmap equality, Equality equation, Equation solver, Function builder, etc):
install-package GeneticSharp.Extensions -Version 2.0.0-rc1
If you’re still in a .NET Framework version lower than 4.6.2 project use the 1.2.0 version.
install-package GeneticSharp -Version 1.2.0
To start the port I followed the good practices described in the links below:
Based on those readings, I decided to support .NET Standard 2.0 (netstandard2.0) and .NET Framework 4.6.2 (net462). This means that GeneticSharp can be used in .NET Standard and .NET Framework projects with no differences.
Another thing I considered was Unity3D support: based on the post Unity 2018.1 - .NET Standard 2.0 and .NET 4.6 support, Unity already has experimental support for netstandard2.0 and net462 libraries.
The domain part of the library, GeneticSharp.Domain, was the easiest part to port; if I remember clearly, there were no changes at all besides those in the .csproj.
The extensions in the GeneticSharp.Extensions project needed some more work, because System.Drawing does not exist directly in .NET Standard 2.0. That was resolved using the System.Drawing.Common NuGet package.
The NCalc library used on FunctionBuilderFitness was updated to use the NCalc.NetCore version.
One of the trickiest parts of the port was GeneticSharp.Infrastructure.Threading, because it used the external library SmartThreadPool, which did not support .NET Core at that time. I decided to implement the parallel task executor using the .NET built-in ThreadPool class; this implementation was done in ParallelTaskExecutor and its use can be tested in ParallelTaskExecutorTest.
The GeneticSharp sample app (GeneticSharp.Runner.GtkApp) was built using Gtk# 2, but only version 3 was ported to .NET Core, and there are huge breaking changes between these two Gtk# versions, so for a while I'll keep the sample app running only on .NET Framework/Mono.
I hope in the near future we can build a cross-platform sample app, maybe using Xamarin.Forms for iOS, Android, macOS, and UWP. This would be great, but performing that task during the .NET Core port is far beyond the scope right now.
The links below are some readings that I did while looking for some GTK# alternatives:
The unit test projects were using Rhino Mocks as the mocking library, but Rhino Mocks did not support .NET Core (and maybe never will), so it was replaced by the amazing NSubstitute.
At the time I wrote this, there was no unanimity about the best cross-platform tool to collect code coverage in .NET Core. There are some tools, but each one has pros and cons:
dotnet test --collect:"Code Coverage"
altcover Instrumenting coverage tool for .net/.net core and Mono, emitting NCover or OpenCover format output.
MiniCover Minimalist Code Coverage Tool for .NET Core.
I guess the most promising tool is coverlet, but I'll wait until the end of the GeneticSharp 2.0.0 release candidate period to choose one.
Of course, a port always comes with problems that you need to figure out through research and solve, most of the time, using tricks and hacks. These are some of the ones I dealt with:
GeneticSharp needs to support .NET Framework 4.6.2, but msbuild does not know where to look for the framework assemblies on macOS and Linux, so I found this .NET SDK issue comment:
When compiling .NET SDK 2.0 projects targeting .NET 4.x on Mono using 'dotnet build' you have to teach MSBuild where the Mono copy of the reference assemblies is.
This msbuild file GeneticSharp.dotnet-core.targets was created and referenced in all .csproj.
Sometimes dotnet test gives the error:
Starting test execution, please wait...
Failed to initialize client proxy: could not connect to test process.
Test Run Aborted
Deleting the "obj" and "bin" folders from the unit test project will "fix" the problem.
Here is some reading I did during the whole process; maybe it can be useful to someone else porting a library to .NET Core too.
OutputPath attribute in Visual Studio 2017 project (new .csproj file format) without target framework cluttering the resolved path?
Now I'm planning to create a GeneticSharp sample using Unity3D's new beta features, like the new ECS (Entity Component System), the C# Job System, and the Burst compiler.
Let’s evolve!
]]>The additions in this version are the new sequence mutation operators: Displacement, Insertion, and Partial Shuffle (PSM).
Displacement Mutation: a substring is randomly selected from the chromosome, removed, and then reinserted at a randomly selected position.
Insertion Mutation: a gene is randomly selected from the chromosome, removed, and then reinserted at a randomly selected position.
Partial Shuffle Mutation (PSM): we take a sequence S limited by two positions i and j chosen at random. The gene order in this sequence is shuffled, and it is reshuffled until it becomes different from the starting order.
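The first two operators are easy to sketch in Python (the library's operators are C# classes; these standalone functions are only illustrative):

```python
import random

def displacement_mutation(genes, rnd=random):
    """Remove a random substring and reinsert it at a random position."""
    genes = list(genes)
    i, j = sorted(rnd.sample(range(len(genes) + 1), 2))
    segment, rest = genes[i:j], genes[:i] + genes[j:]
    k = rnd.randrange(len(rest) + 1)
    return rest[:k] + segment + rest[k:]

def insertion_mutation(genes, rnd=random):
    """Remove a single random gene and reinsert it at a random position."""
    genes = list(genes)
    gene = genes.pop(rnd.randrange(len(genes)))
    genes.insert(rnd.randrange(len(genes) + 1), gene)
    return genes
```

Both operators only rearrange genes, so the result is always a permutation of the input, which is what makes them safe for ordered chromosomes like TSP routes.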
I would like to thank Ahmet Can Saner (@cansaner) for contributing these great new mutations, which he developed during his master's degree.
If you want to use the new mutations in your project, just get the new GeneticSharp version from NuGet.
Let’s evolve!
]]>I googled and didn't find any project or asset in the Asset Store that already did something like this (please let me know if you know a similar tool), and, well, I prefer coding something to googling something, so I coded a little inspector that I named ScenePreview:
UPDATED: I created a new Scene Preview solution, now using its own window editor: Scene Preview Window.
Just download the gist below to your Unity3D project and add it inside an "Editor" folder.
After this, open it and edit the line below:
// Change this to a folder in your project.
// Maybe the folder where your scenes are located. Remember to create a subfolder called "Resources" inside of it.
const string PreviewFolders = "_scenes";
If you select any scene in the hierarchy you will see a message like this:
“There is no image preview for scene ‘’ at ‘’. Please play the scene on editor and image preview will be captured automatically.”
So, play the scene in the editor and the image preview will be taken; when you select the scene file again you will see the preview.
That’s it! I hope this inspector can be useful to you too.
]]>I created GeneticSharp a long time ago for three main reasons:
As far as I know, GeneticSharp has been used in a lot of different projects, from card game deck optimization, a self-managing distributed file system, and context-sensitive code completion, to airplane trajectory optimization. These are pretty cool and exciting topics, but this tutorial is not about those advanced topics. Here I want to take a very simple sample and show how easily and quickly you can add genetic algorithms to your project using GeneticSharp. So, let's evolve!
In our sample we will optimize the input of a mathematical function, which will be used as our fitness evaluation function. We will use a function everyone saw in school: the famous Euclidean distance, commonly known as the distance between two points:
We will consider our chromosome fitness to be the result of this function. The higher the result, the better the chromosome fitness.
Our chromosome will be the Euclidean distance function arguments, the X1, Y1, X2 and Y2.
The goal of our genetic algorithm is to find the input values of the Euclidean distance function, X1, Y1, X2 and Y2, that result in the greatest distance within a rectangular area.
What? You may think "this is a stupid goal", because everyone knows that the longest distance between two points in a rectangular area is its diagonal. Yeah, you are right, but this is one of the best things about genetic algorithms: our GA code does not know anything about this, it just knows that some X1, Y1, X2, Y2 inputs generate a fitness value and that greater fitness is better. The other justification for such a simple GA objective is that everyone can understand what is happening, so we can focus on learning about genetic algorithms.
We could easily create a brute force solution with four nested loops that would find the solution to our problem, but as I mentioned before, this "problem" is just for tutorial purposes; in normal applications, genetic algorithms find solutions to problems where the solution is not so obvious, or in some cases where the solutions are even unknown.
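Such a brute force could look like this (a Python sketch assuming an integer grid; the GA sample itself is C#):

```python
import itertools
import math

def brute_force_max_distance(width, height, step=1):
    """Exhaustively check every pair of grid points and return the largest
    distance together with the points that produce it."""
    points = list(itertools.product(range(0, width + 1, step),
                                    range(0, height + 1, step)))
    return max((math.hypot(x2 - x1, y2 - y1), (x1, y1, x2, y2))
               for (x1, y1), (x2, y2) in itertools.product(points, repeat=2))
```

On a 10x10 area this confirms the obvious answer, the diagonal, but the cost grows with the fourth power of the grid size, which is exactly the kind of search a GA avoids.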
Open your IDE (Visual Studio/Xamarin Studio) and create a new console project.
Install the GeneticSharp package:
install-package GeneticSharp
GeneticSharp implements all the classic components of a genetic algorithm, like gene, chromosome, population, fitness, selection, crossover, mutation, reinsertion and termination. When you use it to build your genetic algorithm you just need to implement a few things; mostly you will need to code just your solution’s chromosome and fitness.
A chromosome is the representation of a possible solution in genetic algorithms.
In GeneticSharp a chromosome needs to implement the IChromosome interface, but in most cases you just need to inherit from the ChromosomeBase class and override the GenerateGene and CreateNew methods.
Besides IChromosome and ChromosomeBase there are also the BinaryChromosomeBase, FloatingPointChromosome and IntegerChromosome classes, which can be used directly when your solution’s chromosome can be represented as numbers or as a string of 0s and 1s.
In our case we need to create a chromosome that represents the input variables of the Euclidean distance function: X1, Y1, X2 and Y2.
The FloatingPointChromosome is a perfect fit for this kind of representation, because it allows representing more than one number inside it.
Open the Program.cs file and, inside the Main method, enter the code below:
In the code above we created two variables to represent our rectangular area size, maxWidth and maxHeight, then we created a new instance of FloatingPointChromosome; this will be the template for all chromosomes in our solution.
The constructor receives four arrays:
1) The minimum values of the numbers inside the chromosome. Our rectangular area starts at point (0, 0), so the minimum value of X1, Y1, X2 and Y2 is 0 (zero).
2) The maximum values. We use our previously created variables.
3) The number of bits used to represent each number. The maximum value is 998, so 10 bits is what we need.
GeneticSharp will warn you if you try to use a number of bits that cannot hold the numbers inside your floating point chromosome.
4) The number of fraction (scale or decimal) digits of each number. In our case we will not use any.
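If you are wondering where the “10 bits” value comes from: 10 bits can hold values from 0 to 1023, which covers 0 to 998. A quick sanity check in Python (just the arithmetic, not part of the tutorial’s C# code):

```python
def bits_needed(max_value, fraction_digits=0):
    """Bits required to represent every integer from 0 up to
    max_value scaled by 10^fraction_digits."""
    return (max_value * 10 ** fraction_digits).bit_length()

print(bits_needed(998))  # 10, since 512 <= 998 < 1024
print(bits_needed(680))  # 10
# One fraction digit would scale 998 to 9980 and need 14 bits instead.
print(bits_needed(998, fraction_digits=1))  # 14
```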
The population holds the possible solutions to our problem, so we need to create a population of our Euclidean distance chromosome.
In GeneticSharp a population is represented by the IPopulation interface, but in most cases you can use the Population class directly.
We created a population that will have a minimum of 50 chromosomes and a maximum of 100, and used our chromosome template as the “Adam chromosome” (yeah, you get the reference) of our GA.
The fitness function is where the genetic algorithm will evaluate and assign a value (fitness) to each chromosome it generates. A good fitness function can guide your GA to a fast and optimal solution.
In GeneticSharp we represent a fitness function through the IFitness interface. Almost always you will have to code a class that implements this interface, but for our tutorial we can use the simple and low-friction FuncFitness class. This class allows us to pass our fitness evaluation as its constructor argument.
We receive an IChromosome in the variable “c”, then we cast it to FloatingPointChromosome.
To evaluate the chromosome we need to convert it from its genotype (FloatingPointChromosome) to its phenotype (x1, y1, x2 and y2); we do this by calling the ToFloatingPoints method. This method returns an array of numbers using the configuration we defined when we created our Euclidean distance chromosome. Now that we have our X1, Y1, X2 and Y2 numbers, we just need to pass them to the Euclidean distance function and return the result as the fitness value of the current chromosome.
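The genotype-to-phenotype conversion boils down to slicing the chromosome’s bit string into fixed-size chunks and decoding each one. The Python sketch below shows the general idea (it is not GeneticSharp’s exact decoding logic, just the concept):

```python
def decode(bits, n_vars, bits_per_var):
    """Split a bit string into n_vars chunks and decode each as an integer."""
    assert len(bits) == n_vars * bits_per_var
    return [int(bits[i * bits_per_var:(i + 1) * bits_per_var], 2)
            for i in range(n_vars)]

# 4 variables x 10 bits each = a 40-bit genotype.
genotype = "1111100110" + "1010101000" + "0000110001" + "0000000000"
print(decode(genotype, 4, 10))  # [998, 680, 49, 0] -> x1, y1, x2, y2
```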
Selection is the genetic algorithm operator responsible for deciding which chromosomes of the current population will be selected as parents of the next population.
You can code your own selection through the ISelection interface or by extending the SelectionBase class.
Besides these, you can use the already implemented classic selections: Elite, Roulette Wheel, Stochastic Universal Sampling and Tournament.
Elite selection is a good option, because it will select the chromosomes with the best fitness (greatest distance). You can try the other selection options too and see how they change the GA’s speed and results.
The chromosomes picked by the selection need to cross to generate the new possible solutions of the next generation of the GA. The crossover operator is responsible for crossing these selected chromosomes.
There are the ICrossover interface and the CrossoverBase class if you want to code your crossover from scratch, or you can use one of the already available ones: Cut and Splice, Cycle (CX), One-Point (C1), Order-based (OX2), Ordered (OX1), Partially Mapped (PMX), Position-based (POS), Three Parent, Two-Point (C2) and Uniform.
Some of those classic crossovers, like OX1 and OX2, cannot be used in our tutorial because they need chromosomes with ordered genes, which is not the case of our chromosomes with genes like 0101001100. Do not worry: GeneticSharp will warn you if you try to use an invalid chromosome in an ordered crossover.
Uniform Crossover enables the parent chromosomes to contribute at the gene level rather than the segment level; for example, if the mix probability is 0.5, the offspring has approximately half of the genes from the first parent and the other half from the second parent.
So, in our case Uniform Crossover is a very good option, because using the 0.5f mix probability it will generate new chromosomes that are a combination of X1, Y1 from one parent and X2, Y2 from another parent.
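The gene-by-gene decision that Uniform Crossover makes can be sketched like this in Python (an illustration of the operator’s idea, not GeneticSharp’s implementation):

```python
import random

def uniform_crossover(parent1, parent2, mix_probability=0.5, seed=None):
    """Build a child by taking each gene from parent1 with probability
    mix_probability, otherwise from parent2."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [a if rng.random() < mix_probability else b
            for a, b in zip(parent1, parent2)]

p1, p2 = list("1111111111"), list("0000000000")
child = uniform_crossover(p1, p2, seed=42)
print("".join(child))  # each gene drawn from one of the two parents
```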
The biology definition of mutation is: “In genetics, mutation may be small scale (affecting a gene) or large scale (involving a change in the chromosome). It may arise from faulty deletions, insertions, or exchanges of the genetic material. Such a change may result in the creation of a new character or trait.”
This biological process is one of the processes responsible for me being able to write this tutorial and you being able to read it. Without mutation, our species might have been stuck in a local optimum and perhaps we would never have evolved into what we are now.
The mutation operator has the same purpose in genetic algorithms: it avoids our GA getting stuck in a local optimum and never finding a better solution.
Like the other operators, you can create your own mutation by implementing the IMutation interface or extending MutationBase, or use one from the GeneticSharp menu: Flip-bit, Reverse Sequence (RSM), Twors and Uniform.
Flip-bit mutation is a mutation specific to chromosomes that implement the IBinaryChromosome interface, as our FloatingPointChromosome does. It will randomly choose a gene and flip its bit, so a gene with value 0 will become 1 and vice versa.
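Conceptually, flip-bit mutation is just this (a Python sketch of the idea):

```python
import random

def flip_bit(genes, seed=None):
    """Pick one random gene and invert its bit."""
    rng = random.Random(seed)
    i = rng.randrange(len(genes))
    mutated = list(genes)
    mutated[i] = "1" if mutated[i] == "0" else "0"
    return mutated

before = list("0000000000")
after = flip_bit(before, seed=7)
# Exactly one position differs from the original chromosome.
print("".join(before), "->", "".join(after))
```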
A termination decides when a GA should stop. GeneticSharp uses the generation number termination with just one generation as the default. This means that the genetic algorithm will run just one generation when you call the Start method; after that you can increment the expected generation number of the termination and call the Resume method as many times as you want.
There are cases where you want to call the Start method and just wait until some condition is reached; this is why terminations exist.
If you have some special condition to terminate your GA you can implement the ITermination interface or extend the TerminationBase class, but in most cases you just need to use one of the available terminations: Generation number, Time evolving, Fitness stagnation, Fitness threshold, And and Or (which allow combining other terminations).
In our tutorial we will use the fitness stagnation termination with an expected stagnant generations number of 100; this means that if our GA keeps producing the same best chromosome fitness for 100 consecutive generations, it will be terminated.
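The stagnation check itself is simple; here is a Python sketch of the rule (GeneticSharp’s fitness stagnation termination does this bookkeeping for you):

```python
def has_stagnated(best_fitness_history, stagnant_generations=100):
    """True when the best fitness did not change over the last N generations."""
    if len(best_fitness_history) < stagnant_generations:
        return False
    tail = best_fitness_history[-stagnant_generations:]
    return len(set(tail)) == 1

print(has_stagnated([1.0, 2.0, 3.0], stagnant_generations=3))            # False
print(has_stagnated([1.0, 2.0, 3.0, 3.0, 3.0], stagnant_generations=3))  # True
```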
Now that everything is set up, we just need to instantiate and start our genetic algorithm and watch it run.
So the GA ran, but where is the result? You can always get the best chromosome from the GeneticAlgorithm.BestChromosome property.
Another and better way to monitor the current best chromosome is to use the GeneticAlgorithm.GenerationRan event. This event is raised right after a generation finishes running. Using this event you can see in real time how the genetic algorithm is evolving.
Let’s replace our last “ga.Start();” line with the code below:
Now if you run the program you will see an output like this:
Generation 1: (178,330),(974,228) = 802.508566932466
Generation 2: (950,487),(45,520) = 905.601457596
Generation 3: (935,103),(38,617) = 1033.83025686038
Generation 4: (998,680),(49,65) = 1130.85189127489
Generation 8: (998,680),(57,1) = 1160.39734573981
Generation 9: (998,680),(49,1) = 1166.89416829462
Generation 11: (998,680),(17,19) = 1182.91250733095
Generation 12: (998,680),(25,3) = 1185.35142468384
Generation 14: (998,680),(17,3) = 1191.92701118819
Generation 16: (998,680),(17,1) = 1193.06412233375
Generation 17: (998,680),(16,1) = 1193.88651051932
Generation 23: (998,680),(0,1) = 1207.08119030991
Generation 32: (998,680),(0,0) = 1207.6439872744
If we plot these generations the output image will be like this:
We can see that in the first generations (black lines) the genetic algorithm found some intermediate results, and at the end (red line) it found the best possible solution: a diagonal.
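You can verify the final result by hand: the best chromosome, the pair (998, 680) and (0, 0), really is the full diagonal of the area:

```python
import math

# Distance between the two corners found in generation 32.
best = math.hypot(998 - 0, 680 - 0)
print(best)  # ~1207.64, matching the last line of the GA output
```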
When you finish the tutorial your source code will be this one:
In this tutorial we learned how a genetic algorithm works and how to use GeneticSharp to solve a very simple sample problem. Now you can try it on more complex problems.
Let’s evolve!
Binary chromosomes can be used directly when your solution’s chromosome can be represented as numbers or as a string of 0s and 1s.
A new sample has been added to the GeneticSharp.Runner.GtkApp; this sample, called “Function optimization”, shows the FloatingPointChromosome in action:
Besides these new chromosomes, there is a new mutation: FlipBit, a special mutation for IBinaryChromosome that takes the chosen gene and inverts its bits.
If you want to use the new chromosomes in your project, just get the new GeneticSharp version from NuGet.
In the next few days I will post a tutorial showing how to optimize a very simple function using GeneticSharp and the new FloatingPointChromosome.
Let’s evolve!
Mods support represents a great change in Buildron’s code structure, because we built a mod infrastructure that allows any developer to add their own mods to Buildron.
The classic visuals and behavior that you already know from Buildron were moved to mods; you can see them at the Buildron Classic Mods repository.
Right now we have 9 mods developed for Buildron:
ConsoleMod: adds a console window to Buildron and lets you monitor its events.
SlackBotMod: adds a bot to your Slack that acts as Buildron, so your team can interact with it to filter builds, sort builds, move the camera, receive build status change notifications and take screenshots.
Do you want to develop your own mod for Buildron? Take a look at our wiki pages about mods and start building your mod now:
]]>BuildMod running inside Buildron.ModSdk simulator
For these reasons, I made a mod called Buildron.SlackBotMod that lets you interact with Buildron through Slack.
Currently Buildron SlackBotMod has the following features:
Your team can be notified by Buildron about build status changes. In the mod preferences you can choose which statuses you want to receive notifications for (running|success|failed).
Filter builds by status or text.
Reset the previous builds filter (no filter).
Sort builds by status, text or date.
Move the camera by the amount of pixels defined in the x, y, z coordinates.
Reset the camera position.
Take a screenshot of current Buildron state.
If you are not using Buildron yet, give it a try. If you are already using Buildron and Slack, try my Buildron.SlackBotMod.
]]>This is a pretty easy task to perform in C#, because there are some great client library implementations for the Slack API, like SlackApi, MargieBot and SlackConnector. However, in the Unity3d world the story is a little different, because Unity3d uses an older .NET version and those client libraries are implemented using newer .NET framework versions that are incompatible with Unity3d.
I could have tried to compile those client library projects using an older .NET version, which I actually tried to do, but almost all of them use things like Task and async, which are really not supported by the Unity3d .NET version right now.
Then I decided to try a very raw solution using the Unity3d WWWForm and it worked well; it is very simple, but it can be useful to someone. The result is the code below:
The ModController class will be a MonoBehaviour responsible for showing the user a window where the Buildron events will be logged. It’s a very simple Unity3d MonoBehaviour that uses some GUILayout calls to build its UI.
The Mod class is the base class for every Buildron mod and it will be responsible for creating the ModController GameObject and listening to many of the Buildron events.
Go to the Buildron releases page and download Buildron-Mod-Template.zip.
Unzip the Buildron-Mod-Template.zip.
Open a prompt/terminal and go to the folder of the unzipped Buildron-Mod-Template.
Type:
jumpstart.exe -n ConsoleMod
If you are using jumpstart on Mac/Linux, remember to call it with the “mono ” prefix.
A folder called ConsoleMod should be created. Open the src/Code/ConsoleMod.sln.
Delete the sample file BoxController.cs.
Create a new class called ModController and add the code below to it:
using System;
using System.Collections.Generic;
using UnityEngine;
public class ModController : MonoBehaviour
{
#region Fields
private string m_title;
private Rect m_windowRect = new Rect(10, 10, 400, 300);
private List<string> m_msgs = new List<string>();
#endregion
#region Constructors
public ModController()
{
m_title = "Console mod (v.{0})".With(GetType().Assembly.GetName().Version);
}
#endregion
#region Methods
/// <summary>
/// Adds the message to the console window.
/// </summary>
/// <param name="message">The message.</param>
/// <param name="args">The arguments.</param>
public void AddMessage(string message, params object[] args)
{
var formattedMessage = message.With(args);
m_msgs.Insert(0, "[{0:HH:mm:ss}] {1}".With(DateTime.Now, formattedMessage));
if (m_msgs.Count > 10)
{
m_msgs.RemoveAt(10);
}
}
void OnGUI()
{
GUILayout.Window(1, m_windowRect, HandleWindowFunction, m_title, GUILayout.MinWidth(100), GUILayout.MinHeight(100));
}
void HandleWindowFunction(int id)
{
GUILayout.BeginVertical();
foreach (var msg in m_msgs)
{
GUILayout.Label(msg);
}
GUILayout.EndVertical();
}
#endregion
}
Replace the content of Mod.cs with the code below:
using Buildron.Domain.Mods;
using UnityEngine;
namespace ConsoleMod
{
/// <summary>
/// Responsible for creating the ModController GameObject and listening to many of the Buildron events.
/// </summary>
public class Mod : IMod
{
/// <summary>
/// Initialize the mod with the context.
/// </summary>
/// <param name="context">The mod context.</param>
public void Initialize(IModContext context)
{
var controller = CreateModController();
ListenEvents(context, controller);
}
private static ModController CreateModController()
{
var go = new GameObject("ConsoleController");
return go.AddComponent<ModController>();
}
private static void ListenEvents(IModContext context, ModController controller)
{
context.BuildFound += (sender, e) =>
{
controller.AddMessage("Build found: {0}", e.Build);
};
context.BuildRemoved += (sender, e) =>
{
controller.AddMessage("Build removed: {0}", e.Build);
};
context.BuildsRefreshed += (sender, e) =>
{
controller.AddMessage("Build refreshed: {0} builds found, {1} builds removed, {2} builds status changed", e.BuildsFound.Count, e.BuildsRemoved.Count, e.BuildsStatusChanged.Count);
};
context.BuildStatusChanged += (sender, e) =>
{
controller.AddMessage("Build status changed: {0}", e.Build);
};
context.BuildTriggeredByChanged += (sender, e) =>
{
controller.AddMessage("Build triggered by changed: {0}/{1}", e.Build, e.Build.TriggeredBy);
};
context.BuildUpdated += (sender, e) =>
{
controller.AddMessage("Build updated: {0}", e.Build);
};
context.CIServerStatusChanged += (sender, e) =>
{
controller.AddMessage("CI server status changed: {0}", e.Server.Status);
};
context.RemoteControlChanged += (sender, e) =>
{
controller.AddMessage("RC changed: {0}", e.RemoteControl);
};
context.UserAuthenticationCompleted += (sender, e) =>
{
controller.AddMessage("User authentication completed: {0}:{1}", e.User, e.Success ? "success" : "failed");
};
context.UserFound += (sender, e) =>
{
controller.AddMessage("User found: {0}", e.User);
};
context.UserRemoved += (sender, e) =>
{
controller.AddMessage("User removed: {0}", e.User);
};
context.UserTriggeredBuild += (sender, e) =>
{
controller.AddMessage("User triggered build: {0}/{1}", e.User, e.Build);
};
context.UserUpdated += (sender, e) =>
{
controller.AddMessage("User updated: {0}", e.User);
};
}
}
}
Compile the project.
Open the project src/Unity/ConsoleMod inside Unity3d editor.
Open SimulatorScene.
Hit the “Play” button.
You should see a scene like this:
In the menu “Buildron”, click on “Show Simulator”.
Click on the “BuildFound”, “BuildStatusChanged” and “BuildRemoved” buttons; you should see these events being registered in the ConsoleMod window.
Hit the “Play” button again to stop playing the scene.
There are two folders inside your Unity3d project called “Materials” and “Prefabs”. They are created from the Buildron-Mod-Template, but ConsoleMod doesn’t need them. You can remove them.
ConsoleMod has no Unity3d assets, so in this case you don’t need to use the menu “Buildron / BuildMod”; you just need to compile your project inside your IDE, as we did in the section “Creating the Mod class”, to get your mod inside Buildron.
Go to the folder ConsoleMod/build and open the Buildron build for your platform.
Hit the play button on Buildron.
You should see the ConsoleMod window registering a lot of Buildron events.
That’s it! We created a Buildron mod that can help debug what is happening in Buildron and its mods at runtime.
So, why don’t you try to create your own Buildron mod now?
You can see the full source code of this post on https://github.com/giacomelli/Buildron-ConsoledMod.
]]>jumpstart is a command-line tool to create new C# projects from prebuilt solutions/templates.
As experienced developers, it’s very common that we have some kind of template solution when we start a new project. Maybe it’s the latest project we worked on, maybe it’s a very good template we use every time to bootstrap a specific kind of project.
I created jumpstart to simplify the process of creating a new project based on those templates or prebuilt solutions.
The idea of the tool was born a long time ago and became a little stronger every time I had to create a new solution and all its projects by hand, but when I saw the message below on the http://xamarin.com/prebuilt page I decided to finally write the tool… and the name was very clear, almost.
The first name I thought of for the tool was prebuilt, but later my friend Giusepe Casagrande convinced me that jumpstart was a really better name… and he was right!
jumpstart is very simple: it takes a folder with a template solution and copies it to a new folder, replacing the template’s root namespace with the new project’s namespace.
A template folder like this:
jumpstart-template/MyClass.cs
jumpstart-template/Properties
jumpstart-template/Properties/AssemblyInfo.cs
jumpstart-template/JumpStartTemplate.csproj
JumpStartTemplate.sln
With this command:
jumpstart -n My.Amazing.NewProject
Will become:
My.Amazing.NewProject/MyClass.cs
My.Amazing.NewProject/Properties
My.Amazing.NewProject/Properties/AssemblyInfo.cs
My.Amazing.NewProject/My.Amazing.NewProject.csproj
My.Amazing.NewProject.sln
The contents of MyClass.cs, AssemblyInfo.cs, My.Amazing.NewProject.csproj and My.Amazing.NewProject.sln were updated by jumpstart to use the namespace My.Amazing.NewProject.
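The core of jumpstart, copying a template tree while rewriting the root namespace in file names and file contents, can be sketched in a few lines of Python (a simplification for illustration, not jumpstart’s actual C# code):

```python
import os
import shutil

def jumpstart(template_dir, template_ns, new_ns):
    """Copy template_dir to a folder named new_ns, replacing template_ns
    in every file's content and file name."""
    shutil.copytree(template_dir, new_ns)
    for root, _dirs, files in os.walk(new_ns, topdown=False):
        for name in files:
            path = os.path.join(root, name)
            with open(path, encoding="utf-8") as f:
                text = f.read()
            with open(path, "w", encoding="utf-8") as f:
                f.write(text.replace(template_ns, new_ns))
            if template_ns in name:
                os.rename(path, os.path.join(root, name.replace(template_ns, new_ns)))

# Usage (assuming a "jumpstart-template" folder exists in the current directory):
# jumpstart("jumpstart-template", "JumpStartTemplate", "My.Amazing.NewProject")
```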
Download it from the releases page.
To see all available options, type:
jumpstart.exe -help
If you are using jumpstart on Mac/Linux, remember to call it with the “mono ” prefix.
If your template folder is called “jumpstart-template” and its namespace is JumpStartTemplate, the only argument you need to pass to jumpstart is -n (namespace).
jumpstart -n My.Amazing.NewProject
The “jumpstart-template” folder should be in the same folder from where you are calling jumpstart.
For example, if your template folder is “my-template” and your template namespace is My.Template, you should call jumpstart this way:
jumpstart -tf my-template -tn My.Template -n My.Amazing.NewProject
You can use a remote .zip file as your template folder. For example, if you want to start a new project with any of those prebuilt apps that Xamarin makes available on http://xamarin.com/prebuilt, you can use the command below to jumpstart your new project using those templates:
jumpstart -tf https://github.com/xamarin/sport/archive/master.zip -tn Sport.Mobile -n My.Sport.Mobile
I hope you can use jumpstart in your next project bootstrap. Download it and use it.
If you want to collaborate, take a look at its GitHub repository.
]]>Download the Buildron-Mod-Template.zip from the releases page. Unzip it in any folder.
Open a prompt (win) or a terminal (linux/mac).
Go to the folder where you unzipped Buildron-Mod-Template.zip.
Type:
jumpstart.exe -n <the name of your mod>
If you are on Linux/Mac, prefix the commands with “mono ”.
For example, if your mod name is “MyAmazingMod”, you should type:
jumpstart.exe -n MyAmazingMod
jumpstart is a tool that I made to help create solutions from prebuilt templates. I will talk about it in a future post.
After jumpstart is done you will see a new folder called MyAmazingMod; inside it there are the following folders:
Open the MyAmazingMod/src/Code/MyAmazingMod.sln.
Select the configuration of your platform.
Inside it you can see Mod.cs and ModController.cs.
Compile the whole solution.
Open the MyAmazingMod/src/Unity/MyAmazingMod on Unity3d.
Open the scene Assets/SimulatorScene and click on play button.
Click on the menu “Buildron/Show Simulator”.
In the simulator click on button “BuildStatusChanged”.
You will see a box falling down every time you hit the button. This behavior is implemented by Mod.cs and ModController.cs in the C# project. If you are curious about it, take a look at the class implementations.
In the Unity3d editor click on menu “Buildron / Build mod”.
Select your platform: Mac, Linux or Windows.
Type your Buildron mods folder:
Click on “Build” button.
Go to folder MyAmazingMod/build and open the Buildron of your platform.
Hit the play button on Buildron.
You should see the same falling box that you saw in the simulator falling inside Buildron every time a build changes status.
With this post you learned how to start creating your own mod from the mod template.
If you want to build more sophisticated mods, please take a look at our tutorial “Creating a mod”.
]]>Buildron 2.0.0-RC1 represents a great change in Buildron’s code structure, because we built mods support that allows any developer to add their own mods to Buildron.
Hope to see you building some mods for Buildron. We’re curious about what crazy mod ideas you may have ;).
We really appreciate your opinion about the mods support, documentation, tutorials and ModSdk. Please get in touch at @ogiacomelli.
BuildMod running inside Buildron.ModSdk simulator
The classic visuals and behavior that you already know from Buildron were moved to mods too; you can see them at the Buildron Classic Mods repository.
The full list of mods is available here: Mods list
If you want to build a mod, take a look at the Getting started guide and the “Creating a mod” tutorial, which teach how to build Buildron mods.
]]>Emscripten is an LLVM-to-JavaScript compiler. It takes LLVM bitcode - which can be generated from C/C++, using llvm-gcc (DragonEgg) or clang, or any other language that can be converted into LLVM - and compiles that into JavaScript, which can be run on the web (or anywhere else JavaScript can run).
And after this I discovered the js-dos project:
A javascript version of dosbox that can run dos programs and games in browser. js-dos provides javascript API to easily run DOS programs and games in browser. This API allows to run unmodified versions of DOS programs, in other words you can run DOS binary in browser.
js-dos uses the Emscripten and em-dosbox projects to build its easy-to-use API.
Below I made a test with it and put one of my first games, a Nibble clone written in 461 lines of C, to run in the browser.
This is all the JS code needed to start js-dos in this sample:
var dosbox = new Dosbox({
id: "dosbox",
onload: function (dosbox) {
dosbox.run("NIBBLE.zip", "./NIBBLE.EXE");
},
onrun: function (dosbox, app) {
console.log("App '" + app + "' is running");
}
});
Below is the result. Enjoy it!
The full sample can be downloaded here: sample.
Note: tested on Chrome 64 bits.
]]>We built it as a Skahal product and now we have decided to open source it: http://github.com/skahal/buildron.
You can run it on Windows, OSX and Linux: https://github.com/skahal/Buildron/releases
There is a remote control app too (Windows, OSX, Linux, iOS and Android), where you can sort and filter builds, among other things: https://github.com/skahal/Buildron-rc/releases
Give it a try:
To fill this gap I created the BadgesSharp service: http://badgessharp.apphb.com. BadgesSharp is a free service to generate badges that need some kind of input and processing before you can display them on GitHub repositories.
In the case of FxCop, we need to run it against our .NET code and send the result report to BadgesSharp and then the service will generate the FxCop badge.
Here is a small tutorial on how to get the FxCop badge in your GitHub repo:
If you don’t have it yet, download and install FxCop.
"C:\Program Files (x86)\Microsoft Fxcop 10.0\FxCopCmd.exe" /project:[Your FxCop file].FxCop /out:fxcop-report.xml
The report will be saved to fxcop-report.xml
Download BadgesSharpCmd and run it:
BadgesSharpCmd -o [your GitHub username] -r [your GitHub repository] -a %GITHUB_REPO_TOKEN% -b FxCop -c fxcop-report.xml
Note: you will need a GitHub personal token: https://github.com/settings/tokens.
More info at: https://badgessharp.apphb.com/Docs/GettingStarted
Edit your readme.md and add the line below:
![FxCop](https://badgessharp.apphb.com/badges/:owner/:repo/FxCop)
You are probably using some continuous integration service; below are some samples:
after_build:
- cmd: >
"C:\Program Files (x86)\Microsoft Fxcop 10.0\FxCopCmd.exe" /project:[Your FxCop file].FxCop /out:fxcop-report.xml
BadgesSharpCmd -o [your GitHub username] -r [your GitHub repository] -a %GITHUB_REPO_TOKEN% -b FxCop -c fxcop-report.xml
BadgesSharpCmd -o [your GitHub username] -r [your GitHub repository] -a %GITHUB_REPO_TOKEN% -b FxCop -c "%system.teamcity.build.tempDir%\fxcop-output-*\fxcop-result.xml"
That’s it! Now you have a FxCop badge to show on your GitHub repository.
BadgesSharp supports other badges too: StyleCop, DupFinder and Plato.
If you like it, take a look at the GitHub repository: https://github.com/giacomelli/BadgesSharp.
]]>I love to use Toggl.com for time tracking and a few days ago I discovered another amazing OSX app: ControlPlane.
ControlPlane, in a few words, is an app that allows you to change your desktop configuration (opened apps, notifications, etc.) when some trigger happens.
In my specific case I wanted to track the time I spend programming on the newest Skahal pet project, a Space Invaders remake, or as we like to call it: SIR. Well, I wanted to start a “Programming” task on Toggl every time I open the Unity3D editor and stop the same task when I close Unity3D.
I followed the steps below to get this done:
We will create the following shell script, which uses the Toggl API to start a task:
echo 'Starting "Programming" task on toggl.com'
curl -v -u <api-token>:api_token \
-H "Content-Type: application/json" \
-d '{"time_entry":{"description":"Programming","tags":[],"pid":<project-id>,"created_with":"curl"}}' \
-X POST https://www.toggl.com/api/v8/time_entries/start
As you can see we need to replace two things in this script:
Now save the content to a file named startTogglTimeEntry.sh.
taskid=`curl -v -u <api-token>:api_token -X GET https://www.toggl.com/api/v8/time_entries/current | grep -Eo '"id":([0-9])+' | cut -d: -f2`
curl -v -u <api-token>:api_token \
-H "Content-Type: application/json" \
-X PUT https://www.toggl.com/api/v8/time_entries/$taskid/stop
This script discovers the latest task started on Toggl and stops it.
Note: you should replace the <api-token> placeholders in this script too.
Download and install the ControlPlane.
First, you must create the context. Go to the ControlPlane preferences:
Select the “Contexts” tab and add a new context; in my case I called it “Skahal”:
Select the “Evidence Sources” tab; the option “Running Application” should be checked:
Select the “Rules” tab and add a new rule to your context; the rule must be of type “Running Application” for Unity 3D:
Select the “Actions” tab; we’ll create 3 new actions. The first one is an action to open Safari on the Toggl timer page. This is an optional action, but I like to see the task running (and I can stop/start it manually sometimes). Add an action of type “Open URL” with the address “https://www.toggl.com/app/timer”, select “On Arrival” and your context:
The second one is an action to start the task on Toggl when the context starts. Add an action of type “Shell Script”, in the “Parameter” field type the path to your startTogglTimeEntry.sh script, select “On Arrival” and your context:
The third one is an action to stop the task on Toggl when the context ends. Add an action of type “Shell Script”, in the “Parameter” field type the path to your stopTogglTimeEntry.sh script, select “On Departure” and your context:
Open the Unity3D editor; at almost the same time your context should be activated and Safari should open the Toggl URL with the “Programming” task started.
Now close Unity3D and the “Programming” task should be stopped on Toggl.com.
That’s it. ControlPlane is an amazing app and the things we can automate with it are nearly infinite!
]]>Some time ago I faced that situation again and I thought: “There should be an easy way to read this code documentation!”.
After some googling I found code from Jim Blackler that allowed developers to read C# code documentation at runtime, but at that time the code was just a downloadable .zip on Jim’s blog. I asked him if I could put the source code on GitHub to allow better code improvements and community collaboration, and he said: “Please go ahead with your plan”.
So, I created the DocsByReflection project on GitHub: https://github.com/giacomelli/DocsByReflection
With DocsByReflection you can easily get your code documentation at runtime in many ways, like:
// From type.
var typeDoc = DocsService.GetXmlFromType(typeof(Stub));
// From property.
var propertyInfo = typeof(Stub).GetProperty("PropertyWithDoc");
var propertyDoc = DocsService.GetXmlFromMember(propertyInfo);
// From method.
var methodInfo = typeof(Stub).GetMethod("MethodWithGenericParameter");
var methodDoc = DocsService.GetXmlFromMember(methodInfo);
// From assembly.
var assemblyDoc = DocsService.GetXmlFromAssembly(typeof(Stub).Assembly);
If you want to collaborate, just fork it on GitHub.
If you just want to use it, there is a NuGet package with the latest binaries:
Install-Package DocsByReflection
Maybe you’ve used it a lot and already love it; in this case, just spread the word ;). But if you are a beginner or an experienced .NET developer who doesn’t know DebuggerDisplay, this is your chance to improve your debugging skills.
We have a class called Tweet:
public class Tweet
{
public string Text { get; set; }
public User User { get; set; }
public int RetweetsCount { get; set; }
public int FavoritesCount { get; set; }
}
You are debugging a list of tweets, let’s say 200, and every tweet in the debugger view looks like the image below:
Clearly, it is not easy to know which tweets are inside that list. Of course you can use breakpoint conditions, tracing, logs, and many other resources to help the debugging process, but DebuggerDisplay is an easier and very cheap solution.
In our scenario, the most important things about the Tweet class are the text, the username, and the retweets count. We’ll add the DebuggerDisplay attribute to the class:
[DebuggerDisplay("{Text} ({User.UserName}) - RTs: {RetweetsCount}")]
public class Tweet
{
public string Text { get; set; }
public User User { get; set; }
public int RetweetsCount { get; set; }
public int FavoritesCount { get; set; }
}
Now, that “secret” tweet list looks like:
When debugging is easier than expected
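As a side note, DebuggerDisplay expressions also accept format specifiers; for example, `,nq` (“no quotes”) removes the quotes the debugger shows around string values. A sketch, trimmed to two properties, with a tiny Main that inspects the attribute metadata:

```csharp
using System;
using System.Diagnostics;

// The ,nq ("no quotes") specifier strips the quotes the debugger
// would otherwise show around the Text string.
[DebuggerDisplay("{Text,nq} - RTs: {RetweetsCount}")]
public class Tweet
{
    public string Text { get; set; }
    public int RetweetsCount { get; set; }
}

class Program
{
    static void Main()
    {
        // The display string is ordinary attribute metadata, so we can read it back.
        var attr = (DebuggerDisplayAttribute)Attribute.GetCustomAttribute(
            typeof(Tweet), typeof(DebuggerDisplayAttribute));
        Console.WriteLine(attr.Value);
    }
}
```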
More information about DebuggerDisplay in the official documentation: msdn.microsoft.com/en-us/library/ms228992(v=vs.110).aspx
]]>These functional tests run perfectly well on your development machine, and now you want to run them on Travis CI. But how do you do that without revealing your Google username and password to the world?
Travis CI encryption keys to the rescue! With them you can encrypt your sensitive data and read it inside your tests running on Travis CI.
In this post I will show you a very simple, real example of using encryption keys to read a username and password from environment variables encrypted in the .travis.yml file.
To perform the encryption using the Travis CLI you will need to set up a Ruby environment on your dev machine. If you are using Windows and do not have a Ruby environment, the easiest way is to use RubyInstaller (don’t be afraid: it works very well and is a fully automatic installation).
After RubyInstaller finishes its job, open the “Start Command Prompt with Ruby” and type:
travis encrypt GDataUsername=[your username] -r [owner/repository]
travis encrypt GDataPassword=[your password] -r [owner/repository]
Open your .travis.yml file and add the encrypted values from the previous step, like the sample below:
env:
global:
- secure: "The GDataUsername encrypted value"
- secure: "The GDataPassword encrypted value"
Indentation is very important in a .yml file (YAML uses spaces, not tabs), so you should respect it. Here is my real .yml file to help.
Now you can read those environment variables in your code; the sample below shows how to do this in C#:
var username = Environment.GetEnvironmentVariable("GDataUsername");
var password = Environment.GetEnvironmentVariable("GDataPassword");
The values of the username and password variables will be the decrypted values that Travis CI has set in the environment.
Commit your files to GitHub and take a look at the Travis CI build log. If you have set everything up correctly, you should see lines like these in the log:
$ export GDataUsername=[secure]
$ export GDataPassword=[secure]
Now your functional tests should run on your dev machine (don’t forget to set the environment variables there too) and on Travis CI as well.
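One extra precaution: on a machine where those variables were never set, GetEnvironmentVariable returns null, so a small guard keeps the functional tests from failing with a confusing error. A sketch using the same variable names as above:

```csharp
using System;

class Program
{
    static void Main()
    {
        var username = Environment.GetEnvironmentVariable("GDataUsername");
        var password = Environment.GetEnvironmentVariable("GDataPassword");

        // GetEnvironmentVariable returns null when the variable does not exist,
        // so skip the credential-dependent tests instead of crashing.
        if (String.IsNullOrEmpty(username) || String.IsNullOrEmpty(password))
        {
            Console.WriteLine("GData credentials not set; skipping functional tests.");
            return;
        }

        Console.WriteLine("Credentials found, running functional tests...");
    }
}
```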
]]>The list below is just a “checkpoint list”, because to understand what each quote really means you need to read the book deeply, probably more than once.
“The objects had behavior and enforced rules. The model wasn’t just a data schema; it was integral to solving a complex problem.” (page 13)
“The vital detail about the design is captured in the code.” (page 37)
“Running code doesn’t lie, as any other document might. The behavior of running code is unambiguous.” (page 38)
“When a model doesn’t seem to be practical for implementation, we must search for a new one. When a model doesn’t faithfully express the key concepts of the domain, we must search for a new one.” (page 49)
“Isolating the domain implementation is a prerequisite for domain-driven design.” (page 75)
“It is important to constrain relationship as much as possible.” (page 83)
“When we force an operation into an object that doesn’t fit the object’s definition, the object loses its conceptual clarity and becomes hard to understand or refactor” (page 104)
“When the users or domain experts use vocabulary that is nowhere in the design, that is a warning sign.” (page 207)
“The concept you need is not always floating on the surface, emerging in conversation or documents. You may have to dig and invent. The place to dig is the most awkward part of your design. The place where procedures are doing complicated things that are hard to explain. The place where every new requirement seems to add complexity.” (page 210)
“Constraints make up a particularly important category of model concepts. They often emerge implicitly, and expressing them explicitly can greatly improve a design.” (page 220)
“When the constraints are obscuring the object’s basic responsibility, or when the constraint is prominent in the domain yet not prominent in the model, you can factor it out into an explicit object or even model it as a set of objects and relationships.” (page 222)
“Create explicit predicate-like value objects for specialized purposes. A specification is a predicate that determines if an object does or does not satisfy some criteria.” (page 226)
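That last quote describes the Specification pattern, which can be sketched in C# as a small predicate object (the Invoice and OverdueSpecification names are hypothetical illustrations, not code from the book):

```csharp
using System;

// A specification is a predicate object: it answers whether a candidate
// satisfies some domain criteria.
public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

public class Invoice
{
    public DateTime DueDate { get; set; }
}

// Hypothetical rule: an invoice is overdue when its due date has passed.
public class OverdueSpecification : ISpecification<Invoice>
{
    private readonly DateTime _now;

    public OverdueSpecification(DateTime now)
    {
        _now = now;
    }

    public bool IsSatisfiedBy(Invoice candidate)
    {
        return candidate.DueDate < _now;
    }
}

class Program
{
    static void Main()
    {
        var spec = new OverdueSpecification(new DateTime(2024, 1, 15));
        var invoice = new Invoice { DueDate = new DateTime(2024, 1, 1) };
        Console.WriteLine(spec.IsSatisfiedBy(invoice));
    }
}
```

The benefit is that the rule gets a name of its own in the model instead of hiding inside an `if` somewhere.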
]]>“A pattern is not a cookbook. It lets you start from a base of experience to develop your solution, and it gives you some language to talk about what you are doing.” (page 227)
Why should we be careful when using DateTime.UtcNow inside a Linq to Entities query? Because we can get unexpected results! After reading this tip, they will be expected results 😉
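The two queries being contrasted might look like this (a sketch with a hypothetical Order entity; a LINQ-to-Objects list stands in for the real Entity Framework DbContext):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Order
{
    public DateTime CreatedDate { get; set; }
}

class Program
{
    static void Main()
    {
        // In the real post this would be a DbContext set; a list stands in here.
        var orders = new List<Order>
        {
            new Order { CreatedDate = DateTime.UtcNow.AddDays(-1) },
            new Order { CreatedDate = DateTime.UtcNow.AddDays(1) }
        }.AsQueryable();

        // First query: the date is evaluated in C# first, so against a real
        // database it is sent as a parameter (@p__linq__1).
        var filterDate = DateTime.UtcNow;
        var first = orders.Where(o => o.CreatedDate > filterDate);

        // Second query: DateTime.UtcNow stays inside the expression tree, so
        // Linq to Entities would translate it to the database server's clock.
        var second = orders.Where(o => o.CreatedDate > DateTime.UtcNow);

        Console.WriteLine(first.Count());
        Console.WriteLine(second.Count());
    }
}
```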
The first one will generate SQL with a parameterized WHERE clause, where @p__linq__1 carries the value of our filterDate variable. The second one will generate a WHERE clause that calls the database’s own clock (GetUtcDate() on SQL Server) instead of receiving a parameter.
Imagine that we’re using the second query inside a sync algorithm in our C# code, and that this algorithm is very sensitive to time. Now imagine that the clock of the server where our C# code runs differs by seconds or minutes from the database server’s clock.
YES, UNEXPECTED RESULTS!
Linq to Entities is very smart and is able to translate our DateTime.Now or DateTime.UtcNow to a matching command on the database side. The important thing here is to remember that it does this, and to use features like these sparingly.
I pushed the source code and binary files to a GitHub repository: https://github.com/giacomelli/Nibble
]]>