DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-9, 11, 13-16, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Dixit et al., U.S. Patent No. 10,025,447 ("Dixit"), in view of Zajac, III, U.S. Patent No. 8,982,133 ("Zajac").
Re: claims 1 and 19 (which are rejected under the same rationale), Dixit teaches
1. (Currently Amended) An information processing apparatus, comprising: at least one processor configured to: control display of a display object in a first display region, (“The avatar may be executed and rendered by an avatar manager or engine, which may be configured to perform automated speech recognition (ASR) and natural language understanding (NLU), which may then be used to perform searches, respond to questions or provide other dialog by performing text-to-speech conversion, possibly while providing some visual animation to complement verbal dialog. For example, the avatar 106 may be depicted as a monkey, or virtually any other type of character, and may be animated to move around, wiggle a tail, smile, wink, and/or provide other non-verbal communication, for example.”; Dixit, col. 3, line 58-col. 4, line 2)
The avatar (display object) is executed to provide visual animation to complement verbal dialog by, for example, moving around or wiggling a tail (control display of the display object in a first display region).
wherein the first display region is in a specific three-dimensional space, (“For example, the device manager 226 may determine that the first device 102 and the second device 104 are located in a same house, a same room, within a threshold distance, or otherwise under control of a same user.”; Dixit, col. 7, lines 47-51)
The first device (first display region) is located in a room (is in a specific three-dimensional space).
the display object corresponds to a conversational agent, the conversational agent supports a first function associated with a user;... (“... the avatar 106 may provide user-support by responding to commands and/or by interacting with the user through command tasks and/or conversation dialog to assist the user with a task, answer questions, and/or perform other operations (supports provision of a function for a user while engaging in conversation with the user). The avatar may be depicted to the user by a visible animated character having a character profile, by a voice having a voice profile, or by both depending on the hardware capabilities of the output device, such as the first device 102 or the second device 104.”; Dixit, col. 3, lines 50-58, Fig. 1)
The avatar (display object) provides user support by responding to commands, interacting with the user through conversation to assist the user with a task, answer questions and perform other operations (corresponding to a conversational agent, the conversational agent supports a first function associated with a user). The avatar (display object corresponding to a conversational agent) is displayed as an animated character having a voice.
... wherein the identified target object is in the specific three-dimensional space, (“For example, the device manager 226 may determine that the first device 102 and the second device 104 are located in a same house, a same room, within a threshold distance, or otherwise under control of a same user.”; Dixit, col. 7, lines 47-51)
The device manager determines that the first device and the second device (target object) are located in, for example, the same room (the identified target object is in the specific three-dimensional space).
and the control of the display of the display object in the first display region is based on a position of the identified target object in the specific three-dimensional space; (“For example, the avatar may be depicted as walking into the right side of the display of the first device, and thus disappearing from the first device. The exit point may be a side of the display...”; Dixit, col. 13, lines 33-38)
The avatar (display object) is depicted as walking into the right side of the display of the first device (control of the display of the display object in the first display region) and disappearing from the first device.
(“For example, the avatar may be depicted as walking into the display of the second device from the left side of the display of the second device, and thus appearing in the display of the second device. The entry point may be a side of the display...”; Dixit, col. 13, lines 42-47)
The avatar is then depicted as walking into the display of the second device (target object) from the left side of the display of the second device, thus appearing in the display of the second device.
(“... the entry and exit points may be coordinated based on a relative position of the first device 102 and the second device 104 and/or on logical or arbitrary considerations (e.g., exit right and then appear on left, exit on the bottom and then appear on the bottom (or top), etc.)”; Dixit, col. 13, lines 47-52)
The avatar’s entry and exit points on the displays of the first device (first display region) and the second device (identified target object) are based on the relative position of the first device and the second device (based on a position of the identified target object in the specific three-dimensional space).
and dynamically control display of an animation based on a relative position between the identified target object and the display object, (“... the entry and exit points may be coordinated based on a relative position of the first device 102 and the second device 104 and/or on logical or arbitrary considerations (e.g., exit right and then appear on left, exit on the bottom and then appear on the bottom (or top), etc.)”; Dixit, col. 13, lines 47-52)
For example, the first device and the second device are positioned such that the avatar (dynamically control display of an animation) appears to exit the first device to the right edge of the first display and enter the second device (target object) on the left edge of the second display (based on a relative position between the identified target object and the display object).
wherein the animation is related to at least one of a first representation of movement of the display object from the first display region to outside of the first display region or a second representation of movement of the display object from the outside of the first display region into the first display region. (“The avatar manager 230 may store various other data in the avatar data, such as exit and entry points used by the avatar to exit/enter the devices during a migration, animation scenarios used for the exit/entry (e.g., jumping to other device, walking to other device, climbing to other device, riding a vehicle to another device, etc.).”; Dixit, col. 8, lines 30-36)
The animation includes, for example, the avatar walking to the other device. When walking to the other device, for example, the avatar walks off the display screen of the first device (a first representation of movement of the display object from the first display region to outside of the first display region). Or, for example, when walking to the other device, the avatar walks off the display screen of the second device (from the outside of the first display region) and enters the first device (a second representation of movement of the display object from the outside of the first display region into the first display region).
Dixit is silent regarding “detect a movement trigger, wherein the movement trigger is related to a context associated with the user; identify a target object based on the detected movement trigger”; however, Zajac teaches
... detect a movement trigger, wherein the movement trigger is related to a context associated with the user; (“Referring to Fig. 2, a user may be playing with a virtual character online in a browser of a computer 201. A user may then start a mobile application (app) on the user’s mobile phone 202... For example, the user may point a camera of the mobile phone at the screen shot of the virtual character as represented in the browser 203.”; Zajac, col. 7, lines 4-12, Fig. 2)
The user points a camera of the mobile phone (detect a movement trigger) at the screen shot of the virtual character that is displayed on the computer. The user's context is that the user is in close proximity to the computer that is displaying the virtual character (the movement trigger is related to the context associated with the user).
identify a target object based on the detected movement trigger, (“For example, the user may point a camera of the mobile phone at the screen shot of the virtual character as represented in the browser 203... The mobile application may then contact the server and initiate a handshake (or other transfer mechanism) to begin transfer of the virtual character 205... When the handshake is complete, the transfer can be initiated either automatically or manually by the user and include an animated sequence... The mobile application then contacts the server and initiates a handshake (or other transfer mechanism) to begin the transfer of the virtual character 303. The transfer then progresses, with the mobile application playing an animated sequence showing the virtual character entering the mobile phone 304.”; Zajac, col. 7, lines 10-24 and 35-40, Fig. 3)
The user points the camera of the mobile phone (detected movement trigger) at the screen shot of the virtual character that is displayed on the computer. The mobile application of the mobile phone then contacts the server to initiate a handshake to begin transfer of the virtual character (identify a target object based on the detected movement trigger) to the mobile phone. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computing environment of Dixit by adding the features of detecting a movement trigger, wherein the movement trigger is related to a context associated with the user, and identifying a target object based on the detected movement trigger, in order to enable the virtual character to interact with the environment surrounding the device currently hosting the virtual character, thereby enabling the portable device to form a truly portable virtual character experience for the user, as taught by Zajac (col. 3, lines 41-46).
Claim 20 is drawn to a program analogous to the apparatus of claim 1, is similar in scope, and is rejected under the same rationale. Claim 20 has an additional limitation. Dixit teaches
20. (Currently Amended) A non-transitory computer-readable medium having stored thereon, computer-executable instructions that when executed by a computer, cause the computer to execute operations, the operations comprising: (“The computing architecture 200 may include one or more processors 202 and one or more computer readable media 204 that stores various modules, applications, programs, or other data. The computer-readable media 204 may include instructions that, when executed by the one or more processor 202, cause the processors to perform the operations described herein...”; Dixit, col. 4, lines 59-67)
The computer readable media includes instructions that when executed by at least one processor, causes the processors to perform operations.
(“Embodiments may be provided as a computer program product including a non-transitory machine-readable storage medium having stored thereon instructions... that may be used to program a computer (or other electronic device) to perform processes or methods described herein.”; Dixit, col. 5, lines 1-6)
A computer program, stored on a machine-readable storage medium is used to program the computer to perform processes and methods.
Re: claim 2, Dixit and Zajac teach
2. (Currently Amended) The information processing apparatus according to claim 1, wherein the animation is related to at least one of the first representation of movement of the display object from the first display region toward the identified target object or the second representation of movement of the display object moving into the first display region from the identified target object. (“At 506, an exit point may be determined for the first device, which may be a location of the first device that the avatar “exits” via a visual display that ends display of the avatar on the first device. For example, the avatar may be depicted as walking into the right side of the display of the first device, and disappearing from the first device.”; Dixit, col. 13, lines 30-35)
For the first device, an exit point is determined, which is a location on the display of the first device where the avatar exits. For example, the avatar is depicted as walking to the right side of the display of the first device and disappearing (the animation is related to at least one of the first representation of movement of the display object from the first display region toward the identified target object).
(“At 508, an entry point may be determined for the second device, which may be a location of the first device that the avatar “enters” via a visual display that begins display of the avatar on the second device. For example, the avatar may be depicted as walking into the display of the second device from the left side of the display of the second device, and thus appearing in the display of the second device.”; Dixit, col. 13, lines 39-45)
For the second device, an entry point is determined, which is a location on the display of the second device where the avatar enters. For example, the avatar is depicted as walking into the display of the second device from the left side of the display of the second device, and appearing in the display of the second device (the second representation of movement of the display object into the first display region from the identified target object).
Re: claim 3, Dixit and Zajac teach
3. (Currently Amended) The information processing apparatus according to claim 1, wherein the identified target object includes electronic equipment that executes a second function associated with the user, (“At 306, the first device may initiate a command to cause the content to be output by the second device. For example, the command may be received by the avatar (via the avatar module 214) by a command such as “Play the movie on my TV,” processed by ASR.”; Dixit, col. 10, lines 10-14)
For example, the first device initiates a command from the user to play the movie on the user's TV (execute a function to be provided to the user), which causes the movie to be played/output by the TV/second device (identified target object includes electronic equipment that executes a second function associated with the user).
and the animation is related to the first representation of movement of the display object from the first display region toward the electronic equipment. (“At 506, an exit point may be determined for the first device, which may be a location of the first device that the avatar “exits” via a visual display that ends display of the avatar on the first device. For example, the avatar may be depicted as walking into the right side of the display of the first device, and disappearing from the first device.”; Dixit, col. 13, lines 30-35)
For the first device, an exit point is determined, which is a location on the display of the first device where the avatar exits. For example, the avatar is depicted as walking to the right side of the display of the first device and disappearing (the animation is related to the first representation of movement of the display object from the first display region toward the electronic equipment).
(“At 508, an entry point may be determined for the second device, which may be a location of the first device that the avatar “enters” via a visual display that begins display of the avatar on the second device. For example, the avatar may be depicted as walking into the display of the second device from the left side of the display of the second device, and thus appearing in the display of the second device.”; Dixit, col. 13, lines 39-45)
For the second device, an entry point is determined, which is a location on the display of the second device where the avatar enters. For example, the avatar is depicted as walking into the display of the second device from the left side of the display of the second device, and appearing in the display of the second device. Thus, the avatar exits the first device’s display region (movement of the display object from the first display region) and moves toward the second device’s display region (toward the electronic equipment).
Re: claim 4, Dixit and Zajac teach
4. (Currently Amended) The information processing apparatus according to claim 3, wherein the at least one processor is further configured to control the electronic equipment to execute the second function. (“At 306, the first device may initiate a command to cause the content to be output by the second device. For example, the command may be received by the avatar (via the avatar module 214) by a command such as “Play the movie on my TV,” processed by ASR.”; Dixit, col. 10, lines 10-14)
For example, the first device initiates a command from the user to play the movie on the user's TV, which causes the movie to be played/output by the TV/second device (to control the electronic equipment to execute the second function).
Re: claim 5, Dixit and Zajac teach
5. (Currently Amended) The information processing apparatus according to claim 3, wherein the at least one processor is further configured to dynamically control display of an animation related to at least one of a third representation of movement of the display object in the first display region or a fourth representation of movement of the display object in a second display region of the electronic equipment, the third representation of movement of the display object corresponds to a representation of movement of the display object toward the second display region and the fourth representation of movement of the display object corresponds to a representation of movement of the display object from the first display region. (“... For example, the avatar that assists the user on a tablet computing device may be depicted as a monkey that has a female voice. The avatar, through animation, may exit the tablet computer by swinging from a virtual rope. To create a consistent and appealing experience, the television may depict the same monkey (avatar) as swinging into the display area of the television.”; Dixit, col. 2, lines 27-34)
The avatar (display object) may be depicted as a monkey that exits the tablet computer (first display region that is displaying the display object) by swinging from a virtual rope (the third representation of movement of the display object corresponds to a representation of movement of the display object toward the second display region). The television then depicts the same monkey swinging into the display area of the television (the fourth representation of movement of the display object corresponds to a representation of movement of the display object from the first display region).
Re: claim 6, Dixit and Zajac teach
6. (Currently Amended) The information processing apparatus according to claim 1, wherein the identified target object includes a specific structure, the specific structure is in the specific three-dimensional space, and the animation is related to the first representation of movement of the display object from the first display region toward the specific structure. (“For example, a user may begin to interact with a video on a tablet computing device. During this interaction... the user may interact with an avatar, which may assist the user with finding the video, playing the video, adjusting preferences of the video, and/or performing other tasks. The user may then cause the video to be displayed or moved (migrated) to the second computing device, such as a television.”; Dixit, col. 1, line 61-col. 2, line 6)
The avatar is migrated from a tablet computing device to a television (the identified target object includes a specific structure, the specific structure is in the specific three-dimensional space).
(“... the migration of the avatar may be coordinated to create a visually consistent and visually appealing user experience. For example, the avatar that assists the user on a tablet computing device may be depicted as a monkey that has a female voice. The avatar, through animation, may exit the tablet computer by swinging from a virtual rope. To create a consistent and appealing experience, the television may depict the same monkey (avatar) as swinging into the display area of the television.”; Dixit, col. 2, lines 27-34)
The migration of the avatar is coordinated to create a visually consistent experience, such that for example, the avatar (display object) assisting the user on the tablet computing device, exits the tablet computing device through animation by swinging from a virtual rope (the animation is related to the first representation of movement of the display object from the first display region toward the specific structure). Then, the television (specific structure is in the specific three-dimensional space) depicts the same avatar swinging into the display area (movement of the display object from the first display region toward the specific structure) of the television.
Re: claim 7, Dixit and Zajac teach
7. (Currently Amended) The information processing apparatus according to claim 1, wherein the position of the identified target object is associated with an identifier of the identified target object. (“The device manager 226 may determine which devices are available for use by the handoff manager 232 for handoff of the content and/or the avatar... For example, the device manager 226 may determine that the first device 102 and the second device 104 are located in the same house, a same room, within a threshold distance, or otherwise under control of the same user. The device manager may determine the location of devices by communicating with those devices, through triangulation of signals, through optical detection, and/or by other locating techniques.”; Dixit, col. 7, lines 45-55, Fig. 2)
The device manager determines which devices are available for handing off the display of the avatar, where the first device and the second device (target object) are located in, for example, the same house or the same room (three-dimensional space), or are within a threshold distance of each other (the position of the identified target object is associated with an identifier of the identified target object).
(“At 410, the remote computing device 110 may determine available devices which may receive a handoff of the content and the avatar. The devices may be determined by proximity to the first device, control by a user of the first device, type of device, and/or other information about the devices.”; Dixit, col. 12, lines 10-17)
Also, the remote computing device determines which devices are available for handoff of the display of the avatar. The devices may be determined by proximity to the first device (the position of the identified target object is associated with an identifier of the identified target object).
Re: claim 8, Dixit and Zajac teach
8. (Currently Amended) The information processing apparatus according to claim 7, wherein the at least one processor is further configured to identify the target object based on designation provided by the user. (“At 306, the first device may initiate a command to cause the content to be output by the second device. For example, the command may be received by the avatar (via the avatar module 214) by a command such as “Play the movie on my TV,” processed by ASR.”; Dixit, col. 10, lines 10-14)
For example, the first device initiates a command from the user to play the movie on the user's TV (identify the target object based on designation provided by the user), which causes the movie to be played/output by the TV/second device.
Re: claim 9, Dixit and Zajac teach
9. (Currently Amended) The information processing apparatus according to claim 7, wherein the at least one processor is further configured to identify the target object based on a direction designated by the user. (“At 306, the first device may initiate a command to cause the content to be output by the second device. For example, the command may be received by the avatar (via the avatar module 214) by a command such as “Play the movie on my TV,” processed by ASR.”; Dixit, col. 10, lines 10-14)
For example, the first device initiates a command from the user (direction designated by the user) to play the movie on the user's TV (target object), which causes the movie to be played/output by the TV/second device (identify the target object based on a direction designated by the user).
Re: claim 11, Dixit and Zajac teach
11. (Currently Amended) The information processing apparatus according to claim 1, wherein the context includes speech of the user and behavior of the user. (“At 302, the first device 102 may initiate the avatar for interaction with a user of the first device. For example, the first device may cause the avatar to appear in response to an audio file or audio stream associated with a user request (e.g., a wake word, a press of a control, etc.). The avatar may appear as an animated character, as a voice, or both... The user may ask the avatar to access, retrieve or otherwise find a specified content.”; Dixit, col. 9, lines 57-67)
The user initiates the avatar on the first device by, for example, an audio stream associated with the user (speech of the user) and pressing a control button (behavior of the user) to request that the avatar access or retrieve specified content, such as a movie.
(“At 306, the first device may initiate a command to cause the content to be output by the second device. For example, the command may be received by the avatar (via the avatar module 214) by a command such as “Play the movie on my TV,” processed by ASR.”; Dixit, col. 10, lines 10-14)
For example, the first device initiates a command, such as a voice command from the user (context includes speech of the user), to play the movie on the user's TV (target object), which causes the movie to be played/output by the TV/second device.
Re: claim 13, Dixit and Zajac teach
13. (Currently Amended) The information processing apparatus according to claim 1, further comprising a memory configured to store three-dimensional map information related to the specific three-dimensional space. (“For example, the device manager 226 may determine that the first device 102 and the second device 104 are located in the same house, a same room, within a threshold distance, or otherwise under control of the same user. The device manager 226 may determine the location of the devices by communicating with those devices, through triangulation of signals, through optical detection and/or by other locating techniques... the device manager 226 may maintain a registration or log of devices and respective relationships with other devices, which may be updated in response to request from the user.”; Dixit, col. 7, lines 47-58)
The device manager determines the location of each of the devices within, for example, a room (three-dimensional space). The device manager also maintains a registration or log (memory configured to store three-dimensional map information related to the specific three-dimensional space) of the devices and their respective relationships with other devices.
Re: claim 14, Dixit and Zajac teach
14. (Currently Amended) The information processing apparatus according to claim 13, wherein the memory is further configured to store an identifier of the identified target object in association with the position of the identified target object. (“For example, the device manager 226 may determine that the first device 102 and the second device 104 are located in the same house, a same room, within a threshold distance, or otherwise under control of the same user. The device manager 226 may determine the location of the devices by communicating with those devices, through triangulation of signals, through optical detection and/or by other locating techniques... the device manager 226 may maintain a registration or log of devices and respective relationships with other devices, which may be updated in response to request from the user.”; Dixit, col. 7, lines 47-58)
The device manager identifies each device and determines the location of each of the devices within, for example, a room. For example, the device manager determines whether a second device (identified target object) is within a threshold distance of the first device (position of the identified target object). The device manager also maintains a registration or log (stores an identifier of the identified target object in association with the position of the identified target object) of the devices and their respective relationships with other devices.
Re: claim 15, Dixit and Zajac teach
15. (Currently Amended) The information processing apparatus according to claim 14, wherein the at least one processor is further configured to acquire the position of the identified target object based on an image captured by the identified target object in the specific three-dimensional space. (“For example, the device manager 226 may determine that the first device 102 and the second device 104 are located in the same house, a same room, within a threshold distance, or otherwise under control of the same user. The device manager 226 may determine the location of the devices by communicating with those devices, through triangulation of signals, through optical detection and/or by other locating techniques... the device manager 226 may maintain a registration or log of devices and respective relationships with other devices, which may be updated in response to request from the user.”; Dixit, col. 7, lines 47-58)
The device manager identifies each device and determines the location of each of the devices within, for example, a room (three-dimensional space). For example, the device manager determines the location of the second device (identified target object) using optical detection (based on an image captured by the identified target object in the specific three-dimensional space). The device manager also maintains a registration or log of the devices and their respective relationships with other devices.
Re: claim 16, Dixit and Zajac teach
16. (Currently Amended) The information processing apparatus according to claim 14, wherein the at least one processor is further configured to acquire the position based on an image of the target object, and the image is associated with the specific three-dimensional space. (“For example, the device manager 226 may determine that the first device 102 and the second device 104 are located in the same house, a same room, within a threshold distance, or otherwise under control of the same user. The device manager 226 may determine the location of the devices by communicating with those devices, through triangulation of signals, through optical detection and/or by other locating techniques... the device manager 226 may maintain a registration or log of devices and respective relationships with other devices, which may be updated in response to request from the user... the device manager 226 may determine a device status based on interacting with the device... through user input, and/or from information from another device, such as from the first device 102 that may capture information about the second device 104 via one of the input components 208, such as a camera or a radio receiver”; Dixit, col. 7, lines 47-58)
The device manager identifies each device and determines the location of each of the devices within, for example, a room (three-dimensional space). For example, the device manager determines the location of the second device (target object) using optical detection, such as one device capturing information from another device using a camera (acquire the position based on an image of the target object, and the image is associated with the specific three-dimensional space). The device manager also maintains a registration or log of the devices and their respective relationships with other devices.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Dixit in view of Zajac as applied to claim 1 above, and further in view of Itoh et al., U.S. Pub. No. 2014/0289362 ("Itoh").
Re: claim 12, Dixit and Zajac are silent regarding “the context includes a position of the user”; however, Itoh teaches
12. (Currently Amended) The information processing apparatus according to claim 1, wherein the context includes a position of the user. (“In the case where the character 900 is not associated with any portable terminal 3, the associating section 510 associates the portable terminal 3 with the character 900 in response to an instruction from a user and based on location information of the character 900 and location information of the portable terminal 3 at the present time... when the catch operation is made by use of the user’s portable terminal 3, the associating section 510 associates the character 900 with the user’s portable terminal 3 on the condition that the character 900 is not associated with any portable terminal 3 and that the user’s portable terminal 3 is present in a predetermined range from the current location of the character 900.”; Itoh, [0050])
The portable terminal is identified based on the user's portable terminal being present within a predetermined range from the current location of the character (the context includes a position of the user). Itoh is combined with Dixit and Zajac such that the character of Itoh is the avatar of Dixit. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computing environment of Dixit by adding the feature that the context includes a position of the user, in order to improve the game property in terms of movement of the character as well as gently promoting the users to move to a particular place, as taught by Itoh ([0034]).
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Dixit in view of Zajac as applied to claim 16 above, and further in view of Lee et al., U.S. Pub. No. 2020/0338453 ("Lee").
Re: claim 17, Dixit and Zajac teach
17. (Currently Amended) The information processing apparatus according to claim 16, wherein the at least one processor is further configured to acquire the position of the target object, based on a marker in the image. (“For example, the device manager 226 may determine that the first device 102 and the second device 104 are located in the same house, a same room, within a threshold distance, or otherwise under control of the same user. The device manager 226 may determine the location of the devices by communicating with those devices, through triangulation of signals, through optical detection and/or by other locating techniques... the device manager 226 may maintain a registration or log of devices and respective relationships with other devices, which may be updated in response to request from the user... the device manager 226 may determine a device status based on interacting with the device... through user input, and/or from information from another device, such as from the first device 102 that may capture information about the second device 104 via one of the input components 208, such as a camera or a radio receiver”; Dixit, col. 7, lines 47-58)
The device manager identifies each device and determines the location of each of the devices within, for example, a room (three-dimensional space). For example, the device manager determines the location of the second device (position of the target object) using optical detection, such as one device capturing information from another device using a camera. Dixit and Zajac are silent regarding the position of the target object being based on a marker in the image; however, Lee teaches this limitation.
(“The first wearable display device 401 to the fourth wearable display device 404 may be distinguished from each other by attaching markers M thereto in different patterns so that the detecting device 100 may track positions of each wearable display device.”; Lee, [0322], Fig. 9)
Fig. 9 illustrates users with their display devices in the three-dimensional space. The detecting device uses a different marker to detect and track the positions of each display device (acquire the position of the target object based on a marker in the image). Lee is combined with Dixit and Zajac such that the optical detection of Dixit is used to detect the markers of Lee. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computing environment of Dixit by adding the feature that the at least one processor is further configured to acquire the position of the target object based on a marker in the image, in order to track the positions of multiple display devices, as taught by Lee ([0322]).
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Dixit in view of Zajac as applied to claim 16 above, and further in view of Mittleman et al., U.S. Pub. No. 2019/0385373 ("Mittleman").
Re: claim 18, Dixit and Zajac teach
18. (Currently Amended) The information processing apparatus according to claim 16, wherein the at least one processor is configured to acquire the position of the identified target object based on a shape of the identified target object in the image. (“For example, the device manager 226 may determine that the first device 102 and the second device 104 are located in the same house, a same room, within a threshold distance, or otherwise under control of the same user. The device manager 226 may determine the location of the devices by communicating with those devices, through triangulation of signals, through optical detection and/or by other locating techniques... the device manager 226 may maintain a registration or log of devices and respective relationships with other devices, which may be updated in response to request from the user... the device manager 226 may determine a device status based on interacting with the device... through user input, and/or from information from another device, such as from the first device 102 that may capture information about the second device 104 via one of the input components 208, such as a camera or a radio receiver”; Dixit, col. 7, lines 47-58)
The device manager identifies each device and determines the location of each of the devices within, for example, a room (three-dimensional space). For example, the device manager determines the location of the second device (position of the identified target object) using optical detection, such as one device capturing information from another device using a camera. Dixit and Zajac are silent regarding the position of the target object being acquired based on a shape of the identified target object in the image; however, Mittleman teaches this limitation.
(“For example, a user may select a portion of the screen indicating the location of the smart-home device. The mobile device may also use computer vision algorithms to recognize the shape of the smart-home device in the image and thereby determine the location of the smart-home device.”; Mittleman, [0144])
Computer vision algorithms are used to recognize the shape of the smart-home device (acquire the position of the identified target object based on a shape of the identified target object in the image) and to determine the location of the smart-home device. Mittleman is combined with Dixit and Zajac such that the computer vision algorithms of Mittleman are included in the computing environment of Dixit. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computing environment of Dixit by adding the feature that the at least one processor is configured to acquire the position of the identified target object based on a shape of the identified target object in the image, in order to use the shape of the detected device to proportionally estimate, for example, whether the distance between the first device and the detected device is increasing, as taught by Mittleman ([0111]).
Response to Arguments
Applicant’s arguments, see Amendment/Request for Reconsideration-After Non-Final Rejection, filed 9/04/2025, with respect to the Objection to the Specification have been fully considered and are persuasive. The Objection to the Specification of the previous Office Action has been withdrawn.
Applicant’s arguments, see Amendment/Request for Reconsideration-After Non-Final Rejection, filed 9/04/2025, with respect to the Claim Interpretation have been fully considered and are persuasive. The Claim Interpretation of the previous Office Action has been withdrawn.
Applicant's arguments filed 9/04/2025 have been fully considered but they are not persuasive. Applicant argues:
“The Applicant submits that Dixit does not expressly or inherently describe at least, for example, the features of “detect a movement trigger, wherein the movement trigger is related to a context associated with the user... identify a target object based on the detected movement trigger,” as recited in amended independent claim 1... Dixit describes that the first device initiates the avatar in response to the user action, such as pressing the control button or speaking a wake word. Dixit further describes that the first device receives a command like “Play the movie on my TV,” which is processed and sent to the second device for output. However, Dixit does not describe detection of a movement trigger related to a context associated with a user. Further, Dixit does not describe that the second device is identified based on the detected movement trigger. Therefore, Dixit does not expressly or inherently describe the features of “detect a movement trigger, wherein the movement trigger is related to a context associated with the user... identify a target object based on the detected movement trigger,” as recited in amended claim 1. Therefore, amended independent claim 1 is not anticipated by Dixit. Further, the Applicant submits that amended independent claims 19 and 20 are also not anticipated by Dixit at least for the reasons stated above with regard to amended independent claim 1.”
This amended limitation of the movement trigger is taught by Zajac. Zajac teaches that the user points a camera of the mobile phone (detect a movement trigger) at the screen shot of the virtual character that is displayed on the computer. The user's context is that the user is in close proximity to the computer that is displaying the virtual character (the movement trigger is related to the context associated with the user) (Zajac, col. 7, lines 4-12, Fig. 2). The mobile application of the mobile phone then contacts the server to initiate a handshake to begin transfer of the virtual character (identify a target object based on the detected movement trigger) to the mobile phone (Zajac, col. 7, lines 10-24 and 35-40, Fig. 3). Claims 1, 19 and 20 have been rejected. Please see the corresponding rejections.
Applicant's arguments filed 9/04/2025 have been fully considered but they are not persuasive. Applicant argues:
“... claims 2-9, 11, and 13-16 are also not anticipated by Dixit based at least on the dependence on amended independent claim 1. Further, each of the dependent claims 2-9, 11, and 13-16 separately recited subject matter not described by Dixit.”
Examiner disagrees. Claims 1 and claims 2-9, 11, and 13-16 have been rejected. Please see the corresponding rejections.
Applicant's arguments filed 9/04/2025 have been fully considered but they are not persuasive. Applicant argues:
“Itoh does not remedy the above-noted deficiencies of Dixit. The Applicant respectfully submits that claim 12 is not taught, suggested, or rendered obvious over the combination of Itoh and Dixit based at least on the dependence on amended independent claim 1. Furthermore, dependent claim 12 recites subject matter not described or suggested by any of the cited references, whether taken individually or in combination.”
Examiner disagrees. Claims 1 and 12 have been rejected. Please see the corresponding rejections.
Applicant's arguments filed 9/04/2025 have been fully considered but they are not persuasive. Applicant argues:
“Lee does not remedy the above-noted deficiencies of Dixit. The Applicant respectfully submits that claim 17 is not taught, suggested, or rendered obvious over the combination of Lee and Dixit based at least on the dependence on amended independent claim 1. Furthermore, dependent claim 17 recites subject matter not described or suggested by any of the cited references, whether taken individually or in combination.”
Examiner disagrees. Claims 1 and 17 have been rejected. Please see the corresponding rejections.
Applicant's arguments filed 9/04/2025 have been fully considered but they are not persuasive. Applicant argues:
“Mittleman does not remedy the above-noted deficiencies of Dixit. The Applicant respectfully submits that claim 18 is not taught, suggested, or rendered obvious over the combination of Mittleman and Dixit based at least on the dependence on amended independent claim 1. Furthermore, dependent claim 18 recites subject matter not described or suggested by any of the cited references, whether taken individually or in combination.”
Examiner disagrees. Claims 1 and 18 have been rejected. Please see the corresponding rejections.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONNA J RICKS whose telephone number is (571)270-7532. The examiner can normally be reached on M-F 7:30am-5pm EST (alternate Fridays off).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona Faulk can be reached on 571-272-7776. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).