Large language models (LLMs) are capable of generating cross-domain design knowledge, opening up new possibilities for creating a myriad of design concepts for early-stage design ideation. The current interfaces and interaction capabilities of LLMs, however, pose challenges in controlling the ideation process in terms of its diversity and quality. To enhance human guidance over the LLM-driven ideation process, we have developed ConceptVis, a system that organizes and symbiotically coordinates the LLM-generated design space through an interactive knowledge graph. In ConceptVis, designers can easily control the breadth and depth of the design space by intuitively prompting the LLM from the graph nodes. Natural Language Processing (NLP) algorithms extract concept keywords and related design knowledge from LLM responses, which are then added to the knowledge graph for visualization. We conducted a user study with 24 novice designers and compared the performance of ConceptVis with that of a chat-based LLM interface for concept generation. With ConceptVis, designers can explore the design space with a balance of breadth and depth. This approach prevents them from merely prompting the LLM to generate random concepts, struggling to keep track of what has been generated in long linear lists, or fixating on early ideas. Supporting users in interacting with LLMs through an interactive visual interface significantly improves both the efficiency and quality of concept generation. This result highlights the importance of developing user-centered design systems to facilitate human-LLM collaboration during the early stages of design.
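The abstract describes a pipeline in which concept keywords are extracted from LLM responses and attached to the prompted node of a knowledge graph. The paper does not publish its implementation, so the following is only a minimal illustrative sketch of that idea; the class name, the adjacency-dict graph representation, and the assumption that the LLM lists concepts as `- Keyword: explanation` lines are all hypothetical, not ConceptVis's actual code.

```python
import re
from collections import defaultdict

class ConceptGraph:
    """Hypothetical sketch of a ConceptVis-style knowledge graph.

    Not the authors' implementation: the extraction heuristic below simply
    assumes the LLM formats each concept as a "- Keyword: explanation" line.
    """

    def __init__(self):
        self.edges = defaultdict(set)  # node -> set of child concept nodes
        self.notes = {}                # concept -> supporting design knowledge

    def expand(self, parent, llm_response):
        """Extract concept keywords from an LLM response and attach them
        as children of the graph node the designer prompted from."""
        for line in llm_response.splitlines():
            match = re.match(r"[-*]\s*([^:]+):\s*(.+)", line.strip())
            if match:
                concept = match.group(1).strip()
                self.edges[parent].add(concept)
                self.notes[concept] = match.group(2).strip()
        return sorted(self.edges[parent])

# Example: expanding one node of the design space with an LLM response.
graph = ConceptGraph()
children = graph.expand(
    "portable chair",
    "- Folding frame: collapses flat for transport\n"
    "- Inflatable seat: packs into a small pouch",
)
```

In a real system the regex heuristic would be replaced by the NLP keyword extraction the paper mentions, and the adjacency dict would feed an interactive graph visualization, but the node-expansion structure would be similar.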