Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.

Pages

Posts

Future Blog Post

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
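For reference, this is a single setting in the site's Jekyll configuration — a minimal _config.yml fragment, assuming the standard academicpages/Jekyll setup:

```yaml
# _config.yml
future: false  # exclude posts whose date is in the future from the build
```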

Blog Post number 4

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Portfolio

Publications

AISpace2: An Interactively Visualizable Tool for Learning and Teaching Artificial Intelligence

Published in AAAI, 2020

AIspace is a set of tools for learning and teaching fundamental AI algorithms. The original version of AIspace was written in Java, without a clean separation between the algorithms and their visualization; it was too complicated for students to modify the underlying algorithms. Its next generation, AISpace2, is built on AIPython, an open-source Python codebase designed to be as close as possible to pseudocode. AISpace2, visualized in JupyterLab, keeps the simple Python code and uses the hooks in AIPython to visualize the algorithms. This allows students to see and modify the high-level algorithms in Python and to visualize the output in graphical form, which helps them build confidence and comfort with AI concepts and algorithms. So far we have tools for search, constraint satisfaction problems (CSP), planning, and Bayesian networks. In this paper we outline the tools and give some evaluations based on user feedback.
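The hook mechanism described above can be sketched as follows (a minimal illustration of the pattern; class and method names are illustrative, not necessarily AIPython's actual API). Algorithm code calls a display hook at key steps; by default it does nothing visible, and a visualizer can override it to draw each step:

```python
class Displayable:
    """Base class providing the visualization hook: algorithm code calls
    self.display(level, ...) at key steps, and a visualizer overrides it."""
    max_display_level = 1  # messages above this level are suppressed

    def display(self, level, *args):
        if level <= self.max_display_level:
            print(*args)

class Searcher(Displayable):
    """A toy breadth-first searcher over a dict-of-lists graph."""
    def __init__(self, graph):
        self.graph = graph

    def search(self, start, goal):
        frontier = [start]
        visited = set()
        while frontier:
            node = frontier.pop(0)
            self.display(2, "expanding", node)  # hook: silent by default
            if node == goal:
                return node
            if node not in visited:
                visited.add(node)
                frontier.extend(self.graph.get(node, []))
        return None

searcher = Searcher({"a": ["b", "c"], "b": ["d"], "c": [], "d": []})
print(searcher.search("a", "d"))  # prints: d
```

A graphical front end subclasses the searcher and overrides display to render the frontier instead of printing, so the algorithm code itself never changes.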

Investigation on Circadian Action and Color Quality in Laser-Based Illuminant for General Lighting and Display

Published in IEEE Photonics Journal, 2020

In this work, the genetic algorithm (GA) is employed to optimize both the circadian action factor (CAF) and the color quality of laser-based illuminants (LBIs) with three, four, and five spectral bands, to explore their possible use in two common white-lighting applications, i.e., bedroom lighting and office lighting. Comparing all LBIs at a correlated color temperature (CCT) of 3000 K and a color rendering index of 80, the CAF of four-band LBIs reaches a minimum of 0.238 while maintaining the highest luminous efficacy of radiation (LER) of 422 lm/W among all cases. The performance of white LBIs is also compared with that of white light-emitting diodes (LEDs). The results demonstrate that, under the same color-rendering and color-temperature conditions, both four-band LBIs and four-band LEDs exhibit the largest circadian tunability of about 4.7, while four-band LBIs possess much higher LER than four-band LEDs. In addition, for the display application, the optimal circadian tunability as a function of color gamut is also investigated at two CCTs (3000 K and 6500 K).
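The optimization setup can be illustrated with a generic genetic-algorithm loop (a minimal sketch minimizing a toy objective, not the paper's spectral-optimization code; the operators and parameters are illustrative):

```python
import numpy as np

def genetic_algorithm(objective, bounds, pop_size=40, generations=100,
                      mutation=0.1, seed=0):
    """Minimize `objective` over the box `bounds` with a simple GA:
    tournament selection, uniform crossover, Gaussian mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.shape[0]
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(generations):
        fitness = np.apply_along_axis(objective, 1, pop)
        # Tournament selection: keep the fitter of two random candidates.
        a, b = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fitness[a] < fitness[b])[:, None], pop[a], pop[b])
        # Uniform crossover between each parent and its neighbor.
        mask = rng.random((pop_size, dim)) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation, clipped back into the feasible box.
        children += rng.normal(scale=mutation, size=children.shape)
        pop = np.clip(children, lo, hi)
    fitness = np.apply_along_axis(objective, 1, pop)
    return pop[np.argmin(fitness)]

# Toy objective: the sphere function, whose minimum is at the origin.
best = genetic_algorithm(lambda x: float(np.sum(x ** 2)),
                         (np.full(3, -5.0), np.full(3, 5.0)))
```

In the paper's setting the decision variables would instead parameterize the spectral bands, and the objective would combine CAF, LER, and color-quality constraints.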

CLIP-PAE: Projection-Augmentation Embedding to Extract Relevant Features for a Disentangled, Interpretable, and Controllable Text-Guided Face Manipulation

Published in SIGGRAPH, 2023

Recently introduced Contrastive Language-Image Pre-Training (CLIP) bridges images and text by embedding them into a joint latent space. This has opened the door to a growing body of work that aims to manipulate an input image by providing a textual description. However, due to the discrepancy between image and text embeddings in the joint space, using text embeddings as the optimization target often introduces undesired artifacts in the resulting images. Disentanglement, interpretability, and controllability are also hard to guarantee for such manipulations. To alleviate these problems, we propose to define corpus subspaces spanned by relevant prompts to capture specific image characteristics. We introduce CLIP projection-augmentation embedding (PAE) as an optimization target to improve the performance of text-guided image manipulation. Our method is a simple and general paradigm that can be easily computed and adapted, and smoothly incorporated into any CLIP-based image manipulation algorithm. To demonstrate the effectiveness of our method, we conduct several theoretical and empirical studies. As a case study, we utilize the method for text-guided semantic face editing. We quantitatively and qualitatively demonstrate that PAE facilitates a more disentangled, interpretable, and controllable image manipulation with state-of-the-art quality and accuracy.
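The core operation — projecting an embedding onto a subspace spanned by corpus prompt embeddings — can be sketched in a few lines of numpy (a toy illustration with random vectors standing in for CLIP embeddings, not the paper's implementation; the augmentation step of PAE is omitted):

```python
import numpy as np

def subspace_projection(corpus_embeddings: np.ndarray,
                        target: np.ndarray) -> np.ndarray:
    """Orthogonally project `target` onto the subspace spanned by the
    rows of `corpus_embeddings` (one row per prompt embedding)."""
    # Orthonormal basis for the row space, via reduced QR of the transpose.
    q, _ = np.linalg.qr(corpus_embeddings.T)  # q: (dim, n_prompts)
    # Sum of the target's components along each basis vector.
    return q @ (q.T @ target)

# Toy example: a 2-D "corpus subspace" inside a 4-D embedding space.
rng = np.random.default_rng(0)
corpus = rng.normal(size=(2, 4))   # two prompt embeddings
target = rng.normal(size=4)        # embedding to project
pae = subspace_projection(corpus, target)
residual = target - pae
# The residual is orthogonal to every corpus vector.
print(np.allclose(corpus @ residual, 0.0))  # prints: True
```

Restricting the optimization target to such a subspace is what lets the method keep only the image characteristics captured by the chosen prompts.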

Talks

Teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.