Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-7 and 9-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ens et al. (US 20180004393 A1).
Regarding claim 1, Ens et al. discloses an augmented reality (AR) device that generates an interactive experience through interaction with at least one Internet of Things (IoT) device, comprising:
a display (a display device, para. 0125; displaying a 3D virtual environment comprising a first logic node and a second logic node, para. 0006);
a user input device (interface enables users to create, delete, or modify different types of logic nodes (visually representing different types of functions) and create, delete, or modify links (visually representing data paths/connections) between logic nodes within the 3D virtual environment, para. 0024); and
at least one processor (computer system 101 includes a processor 102, input/output (I/O) devices 104, and a memory 110. Memory 110 includes an interface engine 115 configured to interact with a database 114, paras. 0026, 0023) that executes instructions of a runtime module (para. 0023) to determine a type of AR user interface widget or graphical representation to render for the at least one IoT device on the display (For each virtual object representing a particular physical object, a logic node (port node) is generated and displayed for the virtual object, the port node representing the set of functions associated with the particular physical object, para. 0024) by
receiving a selection of the at least one IoT device to be controlled during the interactive experience (interface engine 115, when executed by processing unit 102, receives a user selection for a run tool to execute a program represented in the 3D virtual environment and in response, the interface engine 115 operates in conjunction with the smart objects to execute the program, para. 0093) and
providing the determined type of AR user interface widget or graphical representation for the selected at least one IoT device to the display as an overlay (linker tool 520 enables users to create links between logic nodes and the cutter tool 530 enables users to delete links between logic nodes. The navigator tool 540 enables users to move the viewing position within the 3D virtual environment, para. 0050), and that executes instructions of an experience generation user interface to provide a visual programming interface through which a user provides inputs via the user input device to program the interactive experience by generating a program of behaviors of the selected at least one IoT device using at least one of logic gates or control circuits connected to the selected at least one IoT device in the visual programming interface, the logic gates and control circuits together specifying the behaviors to be performed by the at least one IoT device in the visual programming interface (programming and viewing smart objects and connections within a networked environment, para. 0005),
wherein the at least one processor executes instructions to program the interactive experience by executing instructions to connect respective IoT devices, logic gates, and control circuits to specify program logic and information flow to be used in the interactive experience by attaching an output port of one IoT device, logic gate, or control circuit directly to an input port of another IoT device, logic gate, or control circuit (paras. 0047, 0048; interface engine 115 then generates and displays a port node for each virtual object. Each virtual object represents a particular physical object, and a port node comprises a specific type of logic node that represents a set of functions associated with a corresponding physical object. For each physical object, the interface engine 115 may receive object metadata associated with each physical object, paras. 0039, 0040, 0041); and
wherein the at least one processor executes further instructions to execute the program of behaviors to selectively control the at least one IoT device during the interactive experience (Upon receiving a “run” command, the computer system 101 operates in conjunction with the physical smart objects to execute the program, which implements the set of connected functions, para. 0048).
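For context, the node-and-link arrangement mapped above, attaching an output port of one device, gate, or circuit directly to an input port of another and executing the resulting program of behaviors on a "run" command, can be sketched as follows (an illustrative sketch only; all names are hypothetical and are not drawn from Ens et al. or from the claims):

```python
# Hypothetical sketch: a visual program as linked nodes, where an output
# port of one node attaches directly to an input port of another node.

class Node:
    """A logic node (IoT device proxy, logic gate, or control circuit)."""
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn          # behavior applied to the node's input value
        self.downstream = []  # nodes whose input port links to this output port

    def link_to(self, other):
        """Attach this node's output port to another node's input port."""
        self.downstream.append(other)

def run(node, value, trace):
    """On a 'run' command, propagate data along the links."""
    out = node.fn(value)
    trace.append((node.name, out))
    for nxt in node.downstream:
        run(nxt, out, trace)

# Example: a sensor node feeding a Boolean logic gate feeding an actuator.
sensor = Node("temp_sensor", lambda v: v)             # passes its reading through
gate = Node("gt_threshold", lambda v: v > 25)         # Boolean logic gate
actuator = Node("fan", lambda on: "fan_on" if on else "fan_off")
sensor.link_to(gate)
gate.link_to(actuator)

trace = []
run(sensor, 30, trace)  # executing the program selectively controls the actuator
```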
Regarding claim 2, Ens et al. discloses the AR device of claim 1, wherein the at least one processor executes further instructions of the runtime module to perform operations comprising:
registering the at least one IoT device in response to registration information received from the at least one IoT device, the registration information including device input and output information for the at least one IoT device (the interface engine 115 may receive object metadata associated with each physical object (e.g., from a user or a BIM model) that further includes descriptions of each function of the physical object. For example, the metadata may specify a sensor function and type of sensor function (e.g., temperature, light, sound, etc.), an actuator function and type of actuator function, or another type of function comprising a program construct. For example, a first physical object may be configured to perform a set of functions including a first sensor function, a second actuator function, and a third function comprising a logical program construct, para. 0039) and
determining from the registration information a type of AR user interface widget or graphical representation to render on the display for the at least one IoT device (The interface engine 115 then generates and displays a first port node for a first virtual object that corresponds to the first physical object, the first port node representing the three functions associated with the first physical object, para. 0039).
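The registration-based mapping recited in claim 2, registration information carrying device input/output metadata from which a widget type is determined, could be sketched as follows (illustrative only; the function, the metadata keys, and the widget names are hypothetical, not taken from Ens et al.):

```python
# Hypothetical sketch: map a device's declared functions, received as
# registration information, to an AR widget type to render on the display.

def determine_widget(registration):
    """Pick an AR user interface widget from device input/output metadata."""
    functions = registration["functions"]
    if "sensor" in functions and "actuator" in functions:
        return "combined_port_node"
    if "sensor" in functions:
        return "sensor_gauge_widget"
    if "actuator" in functions:
        return "actuator_switch_widget"
    return "generic_port_node"

widget = determine_widget({"device_id": "lamp-1", "functions": ["actuator"]})
```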
Regarding claim 3, Ens et al. discloses the AR device of claim 1, wherein the at least one processor executes further instructions of the runtime module to perform operations comprising registering the AR device and receiving location information from the AR device (set of logic nodes 600 include a port node 610, trigger node 620, aggregator node 630, filter node 640, converter node 650, and a cloud node 660. As shown, each type of logic node is displayed with a distinct visual appearance so that the user can easily identify the type of each logic node displayed within the 3D virtual environment. For example, the interface may display the different types of logic nodes with different colors, shapes, and/or sizes. As discussed above, a port node 610 represents a set of functions (such as sensor or actuator functions) associated with a corresponding physical object, para. 0051).
Regarding claim 4, Ens et al. discloses the AR device of claim 1, wherein the at least one processor executes further instructions of the experience generation user interface to program the interactive experience by performing operations comprising accessing at least one of logic gates or control circuits through the visual programming interface of the AR device (logic nodes may be configured by a rule or parameter set by the user. For example, the configurable logic nodes may include the trigger node 620, aggregator node 630, the converter node 650, and the cloud node 660, para. 0059).
Regarding claim 5, Ens et al. discloses the AR device of claim 1, wherein the at least one processor executes instructions to generate the program of behaviors of the selected at least one IoT device based on input/output rules for the at least one IoT device, where the input/output rules are represented via Boolean logic (para. 0086), arithmetic functions, n-ary mappings, or operations that are definable mathematically.
Regarding claim 6, Ens et al. discloses the AR device of claim 1, wherein the at least one processor executes instructions to generate the program of behaviors of the selected at least one IoT device by performing operations comprising selecting the control circuits to express functions including at least one of a threshold (set of rules/conditions (such as a threshold) may be set by the user, para. 0052), a clamping function, or an oscillator.
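The three control-circuit functions enumerated in claim 6 (a threshold, a clamping function, and an oscillator) admit straightforward mathematical definitions, sketched below for reference (illustrative only; these implementations are hypothetical and are not drawn from Ens et al.):

```python
import math

# Hypothetical sketch of the three control-circuit functions named in claim 6.

def threshold(x, limit):
    """Threshold: emit True once the input reaches the limit."""
    return x >= limit

def clamp(x, lo, hi):
    """Clamping function: restrict the input to the range [lo, hi]."""
    return max(lo, min(hi, x))

def oscillator(t, period):
    """Oscillator: a value that varies periodically with time t."""
    return math.sin(2 * math.pi * t / period)
```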
Regarding claim 7, Ens et al. discloses the AR device of claim 1, wherein the at least one processor executes further instructions to perform operations comprising representing the at least one of logic gates or control circuits in the visual programming interface on the display of the AR device as a three-dimensional box with attachment points for input and output connections to other logic gates, control circuits, and IoT devices, the three-dimensional box including a symbol that describes a behavior represented by the three-dimensional box (see paras. 0024, 0048, 0052, 0058).
Regarding claim 9, Ens et al. discloses the AR device of claim 1, wherein the at least one processor executes further instructions of the experience generation user interface to program the interactive experience to generate a sequence of actions to be performed by the at least one IoT device during the interactive experience (a set of linked logic nodes that represent a set of connected functions, para. 0024; logic node represents a set of functions associated with the first physical object and the second logic node represents a set of functions associated with the second physical object, para. 0006).
Regarding claim 10, Ens et al. discloses the AR device of claim 9, wherein the at least one processor executes further instructions of the experience generation user interface to generate the sequence of actions to be performed by the at least one IoT device during the interactive experience by sampling a state of the at least one IoT device and applying the sampled state to a step in a timeline of the generated sequence of actions (display a 3D virtual environment containing multiple virtual objects that represents a real-world environment (such as a room, building, factory, etc.) containing multiple smart objects, para. 0024).
Regarding claim 11, Ens et al. discloses the AR device of claim 1, wherein the at least one processor executes further instructions of the experience generation user interface to program the interactive experience to use connectors in the visual programming interface of the AR device to connect respective IoT devices, logic gates, and control circuits (3D visual programming interface enables users to create, delete, or modify different types of logic nodes (visually representing different types of functions) and create, delete, or modify links (visually representing data paths/connections) between logic nodes within the 3D virtual environment, para. 0048) to specify program logic and information flow to be used in the interactive experience (authoring of the logic nodes and links produces an executable program. Upon executing the program, data flows between the logic nodes are visually represented as particles moving between the logic nodes, abstract).
Regarding claim 12, Ens et al. discloses the AR device of claim 1, wherein the at least one processor executes further instructions of the experience generation user interface to connect respective IoT devices, logic gates, and control circuits to specify program logic and information flow to be used in the interactive experience (logic node (port node) is generated and displayed for the virtual object, the port node representing the set of functions associated with the particular physical object, para. 0048) by using at least one gateway node to logically connect respective IoT devices, logic gates, and control circuits separated by a distance in the display of the AR device (perform a set of functions including a first sensor function, a second actuator function, and a third function comprising a logical program construct. The interface engine 115 then generates and displays a first port node for a first virtual object that corresponds to the first physical object, the first port node representing the three functions associated with the first physical object, para. 0039).
Regarding claim 13, Ens et al. discloses the AR device of claim 1, wherein the at least one processor executes further instructions of the experience generation user interface to connect respective IoT devices, logic gates, and control circuits to specify program logic and information flow to be used in the interactive experience by using at least one IoT device proxy that represents the at least one IoT device at a spatially separated location from the at least one IoT device and exposes inputs and outputs of the at least one IoT device (for each virtual object representing a physical object, the interface engine 115 generates and displays a port node on or adjacent to the virtual object, the port node representing a set of functions associated with the corresponding physical object, para. 0046).
Regarding claim 14, Ens et al. discloses the AR device of claim 1, wherein the at least one processor executes instructions to execute the program of behaviors to selectively control the at least one IoT device during the interactive experience by dividing execution of the program of behaviors into discrete time steps and updating a state of the at least one IoT device during respective discrete time steps (logic nodes may comprise both an input connector and an output connector, such as a trigger node 620, aggregator node 630, filter node 640, and converter node 650. For logic nodes having both an input connector and an output connector, the interface may consistently display these logic nodes with the input connectors at the top of the logic nodes and the output connectors at the bottom of the logic nodes, whereby the data always flows from the top to the bottom of these logic nodes. By consistently displaying the input and output connectors of the logic nodes in this manner, the interface provides an indication of the direction of data flow within the 3D virtual environment, para. 0058).
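The time-stepped execution recited in claim 14, dividing execution into discrete time steps and updating each device's state once per step, can be sketched as follows (illustrative only; the loop structure and all names are hypothetical, not taken from Ens et al.):

```python
# Hypothetical sketch: execute the program of behaviors in discrete time
# steps, updating the state of each device during each step.

def execute(devices, steps):
    """Advance execution one discrete time step at a time."""
    history = []
    for step in range(steps):
        # Each device's state is updated once per discrete time step.
        for dev in devices:
            dev["state"] = dev["update"](dev["state"], step)
        history.append([d["state"] for d in devices])
    return history

counter = {"state": 0, "update": lambda s, t: s + 1}   # counts time steps
blinker = {"state": False, "update": lambda s, t: not s}  # toggles each step
history = execute([counter, blinker], steps=3)
```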
Regarding claim 15, Ens et al. discloses the AR device of claim 1, further comprising a debugging and playback user interface, wherein the at least one processor executes further instructions to debug and play back the program of behaviors of the interactive experience via the debugging and playback user interface of the AR device (debugging a program, para. 0024; debugging inherently provides stepping through a program one instruction at a time).
Claim 16, a method claim, is rejected for the same reason as claim 1.
Claim 17, a method claim, is rejected for the same reason as claim 2.
Regarding claim 18, Ens et al. discloses the method of claim 16, wherein programming the interactive experience comprises at least one of
(a) interacting with the at least one IoT device to attach logic gates or control circuits to inputs or outputs of the at least one IoT device (see paras. 0024, 0048, 0052, 0058);
(b) providing access to logic gates for programming behaviors based on input/output rules of the IoT devices that are represented via Boolean logic (para. 0086), arithmetic functions, n-ary mappings, or operations that are definable mathematically, and to control circuits to express functions including at least one of a threshold, a clamping function, or an oscillator; or
(c) defining a sequence of actions to be performed by the at least one IoT device as part of the interactive experience (a set of linked logic nodes that represent a set of connected functions, para. 0024; logic node represents a set of functions associated with the first physical object and the second logic node represents a set of functions associated with the second physical object, para. 0006).
Claim 19, a device claim, is rejected for the same reason as claim 1.
Claim 20, a device claim, is rejected for the same reason as claim 2.
Allowable Subject Matter
Claim 8 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS J LETT whose telephone number is (571)272-7464. The examiner can normally be reached Mon-Fri 9-6 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tammy Goddard, can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/THOMAS J LETT/Primary Examiner, Art Unit 2611