DETAILED ACTION
Non-Statutory Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 3-4, 6, 8-13, 17 and 19-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 6-10 and 17-18 of U.S. Patent No. 12,254,717 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims in the reference patent are narrower in scope and thus anticipate the claims in the instant application.
Instant Application, claim 1:
1. An interactive system, comprising:
a camera configured to capture imagery of an environment over time; and
a controller configured to:
analyze a first portion of the imagery to identify a guest in the environment;
instruct an output device to provide an initial output to attempt to interact with the guest;
analyze a second portion of the imagery to determine whether the guest responded to the initial output;
in response to determining that the guest responded to the initial output, instruct the output device to provide a guest-specific additional output; and
in response to determining that the guest failed to respond to the initial output, block the output device from providing the guest-specific additional output.

U.S. Patent No. 12,254,717 B2, claim 1:
1. An interactive portrait system, comprising:
a camera configured to capture imagery of an environment; and
a controller configured to:
analyze the imagery to identify a guest in the environment;
instruct an output device to provide an initial output to attempt to interact with the guest;
analyze the imagery to identify a behavior of the guest during the initial output, after the initial output, or both;
evaluate the behavior of the guest using one or more artificial intelligence algorithms to determine whether the guest responded to the initial output;
in response to determining that the guest responded to the initial output, instruct the output device to provide an additional output to continue to attempt to interact with the guest; and
in response to determining that the guest failed to respond to the initial output:
block the output device from providing the additional output;
analyze the imagery to identify an additional guest in the environment; and
instruct the output device to provide a respective initial output to attempt to interact with the additional guest.

Instant Application, claim 12:
12. An interactive system, comprising:
a controller configured to:
instruct an output device to provide an initial output to attempt to interact with one or more guests in an environment;
analyze imagery received from one or more cameras to determine whether at least one guest of the one or more guests responded to the initial output;
in response to determining that the at least one guest of the one or more guests responded to the initial output, instruct the output device to provide an enhanced additional output; and
in response to determining that the one or more guests failed to respond to the initial output, block the output device from providing the enhanced additional output.

U.S. Patent No. 12,254,717 B2, claim 10 and claim 1:
10. An entertainment venue, comprising: …
a controller configured to:
analyze the imagery to identify a guest traveling along the path toward the interactive area;
instruct the display, the speaker, or both to provide an initial output to attempt to interact with the guest as the guest approaches the interactive area;
analyze the imagery, the sounds, or both to identify a behavior of the guest during the initial output, after the initial output, or both;
evaluate the behavior of the guest using one or more artificial intelligence algorithms to determine whether the guest responded to the initial output; and
in response to determining that the guest responded to the initial output, instruct the display and the speaker to provide an additional output to conduct a conversational interaction with the guest.
1. An interactive portrait system, comprising: …
in response to determining that the guest failed to respond to the initial output, block the output device from providing the guest-specific additional output.

Instant Application, claim 19:
19. A method of operating an interactive system, the method comprising:
instructing, using one or more processors, an output device to provide an initial output to attempt to interact with one or more guests in an environment;
analyzing, using the one or more processors, imagery captured by a camera to determine whether at least one guest of the one or more guests demonstrated signs of interest in the initial output; and
in response to determining that the at least one guest of the one or more guests demonstrated signs of interest in the initial output and using the one or more processors, instructing the output device to provide a guest-specific additional output to attempt to continue to attempt to interact with the at least one guest of the one or more guests.

U.S. Patent No. 12,254,717 B2, claim 17:
17. A method of operating an interactive portrait system in an entertainment venue, the method comprising:
…
in response to determining that the guest demonstrated signs of interest in the interactive area of the entertainment venue and using the one or more processors,
instructing an output device to provide an initial output to attempt to interact with the guest as the guest approaches the interactive area of the entertainment venue, wherein the initial output is based on the one or more characteristics of the one or more items worn or carried by the guest;
…
evaluating, using the one or more processors, the behavior of the guest during the initial output, after the initial output, or both to determine whether the guest responded to the initial output; and
in response to determining that the guest responded to the initial output and using the one or more processors, instructing the output device to provide an additional output to conduct a conversational interaction with the guest.
Further, dependent claims 3-4 correspond to dependent claims 7-8 in the reference patent. Dependent claim 6 corresponds to dependent claim 6 in the reference patent. Dependent claim 8 corresponds to dependent claim 9 in the reference patent. The limitations of dependent claims 9-10 are disclosed in the limitations of independent claim 10 of the reference patent. Dependent claim 11 corresponds to dependent claim 18 in the reference patent. Dependent claim 13 is disclosed in the limitations of independent claim 12 in the reference patent. Dependent claim 17 corresponds to dependent claim 18 in the reference patent. Dependent claim 20 corresponds to dependent claim 18 in the reference patent.
Allowable Subject Matter
Claims 7, 14-16 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 12 and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by EP 2136329 A2 to Shmueli.
As to claim 1, Shmueli discloses an interactive system (Abstract: "The invention is a comprehensive computer implemented digital signage system and method for personalizing the advertising content according to the requirements and/or desire of a customer that is currently viewing the screen."), comprising:
a camera configured to capture imagery of an environment over time ([0031]: "The sensors 16 are responsible for collecting raw data related to the customer's physical characteristics and to the accessories that he/she carries or wears. A non-limiting list of front-end sensors 16 that can be appropriately placed in the sales area to provide useful information about the customers includes: video cameras, digital still cameras, microphones, volume sensors, motion sensors, heat detecting sensors, keyboards, computer mice, touch pads, and RFID sensors."); and
a controller (Display Manager 24, Fig. 1) configured to: analyze a first portion of the imagery to identify a guest in the environment ([0056]: "a face recognition algorithm running on the computer determines from the images collected by the first camera 42 that at least one customer is watching the screen, the system enters the interactive mode");
instruct an output device to provide an initial output to attempt to interact with the guest ([0056]: "Based on these profiles the clips are sorted wherein the clip that most closely matches the gender, age and clothes type of the selected customer is placed first. … iv. Play the first clip in the ordered list");
analyze a second portion of the imagery to determine whether the guest responded to the initial output (i.e. raising of a right hand, [0056]);
in response to determining that the guest responded to the initial output, instruct the output device to provide a guest-specific additional output ([0056]: "v. If this customer raises his right hand, then the system recognizes this agreed upon signal and shows the next clip in the list."); and
in response to determining that the guest failed to respond to the initial output (i.e. not raising of right hand, [0056]), block the output device from providing the guest-specific additional output ([0056] "vi. On the other hand, if this customer raises his left hand, then the system shows the previous clip in the list" That is, since the customer did not raise their right hand, the system does not provide the guest specific additional output of playing the next clip but instead goes to a previous clip).
As to claim 12, the claim recites limitations substantially similar to those of claim 1 and is rejected for the same reasons set forth in the rejection of claim 1 above.
As to claim 19, the claim recites limitations substantially similar to those of claim 1 and is rejected for the same reasons set forth in the rejection of claim 1 above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-3 and 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over EP 2136329 A2 to Shmueli in view of US Patent Pub. 2017/0308965 A1 to Morris et al. ("Morris").
As to claim 2, Shmueli fails to disclose a communication device communicatively coupled to the controller, wherein the communication device is configured to communicate with a portable object carried by the guest to retrieve an identifier from the portable object, and the controller is configured to:
use the identifier to search a database for a preferred animation associated with the identifier, wherein the guest-specific additional output comprises an animation that corresponds to the preferred animation.
Morris discloses comprising a communication device (See Fig. 1, 105, 120) communicatively coupled to the controller, wherein the communication device is configured to communicate with a portable object carried by the guest to retrieve an identifier from the portable object (Fig. 1, 135; ¶ 0137; Morris discloses using a user profile (user identifier) may be retrieved from a mobile device.), and the controller is configured to:
use the identifier to search a database for a preferred animation associated with the identifier, wherein the guest-specific additional output comprises an animation that corresponds to the preferred animation (¶ 0125-0126, 0128; Morris discloses upon receiving a user profile a personal greeting or dynamic advertisements including text, graphics, audio and/or video may be presented to a user.).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to have modified Shmueli with the teachings of Morris of a communication device communicatively coupled to the controller, wherein the communication device is configured to communicate with a portable object carried by the guest to retrieve an identifier from the portable object, and the controller is configured to: use the identifier to search a database for a preferred animation associated with the identifier, wherein the guest-specific additional output comprises an animation that corresponds to the preferred animation, as suggested by Morris, thereby similarly using a known configuration for communicating with a user's mobile device to retrieve profile/identifier information.
As to claim 3, Morris discloses comprising a communication device communicatively coupled to the controller, wherein the communication device is configured to communicate with a portable object carried by the guest to retrieve an identifier from the portable object (See Fig. 1, 120; ¶ 0125-0126, 0128; Morris discloses using a user identifier retrieved from a mobile device.), and the controller is configured to:
use the identifier to search a database for achievements associated with the identifier, wherein the guest-specific additional output is based on the achievements associated with the identifier (¶ 0125-0126; Morris discloses “loyalty rewards identifiers” from the remote user profile server.).
As to claim 8, Morris discloses wherein the initial output comprises a guest-specific initial output (¶ 0126, “personalized greeting”).
As to claim 9, Morris discloses wherein the output device comprises a display and a speaker to provide visual components and audible components in the initial output, the guest-specific additional output, or both (¶ 0131; Morris discloses additional information regarding goods and/or services may be presented to a user which includes textual descriptions, images, audio and/or video.).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over EP 2136329 A2 to Shmueli in view of US Patent Pub. 2020/0201048 A1 to Nakata et al. ("Nakata").
As to claim 4, Shmueli fails to disclose wherein the controller is configured to analyze the first portion of the imagery to identify one or more characteristics of the guest, and the guest-specific additional output is based on the one or more characteristics of the guest.
Nakata discloses wherein the controller is configured to analyze the first portion of the imagery to identify one or more characteristics of the guest, and the guest-specific additional output is based on the one or more characteristics of the guest (¶ 0088; Nakata discloses providing a user with an advertisement in accordance with a performer that a user is focusing his or her attention (characteristic) on.).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to have modified Shmueli with the teachings of Nakata wherein the controller is configured to analyze the imagery to identify one or more characteristics of the guest, and the initial output, the additional output, or both are based on the one or more characteristics of the guest, as suggested by Nakata, thereby similarly using known configurations for providing personalized content based on user behavior.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over EP 2136329 A2 to Shmueli in view of US Patent Pub. 2011/0007142 A1 to Perez et al. ("Perez").
As to claim 5, Shmueli fails to disclose wherein the one or more characteristics comprise a clothing color of clothing worn by the guest, a clothing type of the clothing worn by the guest, a symbol on the clothing worn by the guest, a print on the clothing worn by the guest, accessories worn by the guest, a personal possession carried by the guest, or any combination thereof.
Perez discloses wherein the one or more characteristics comprise a clothing color of clothing worn by the guest (¶ 0143), a clothing type of the clothing worn by the guest, a symbol on the clothing worn by the guest, a print on the clothing worn by the guest, accessories worn by the guest, a personal possession carried by the guest, or any combination thereof.
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to have modified Shmueli with the teachings of Perez wherein the one or more characteristics comprise a clothing color of clothing worn by the guest, a clothing type of the clothing worn by the guest, a symbol on the clothing worn by the guest, a print on the clothing worn by the guest, accessories worn by the guest, a personal possession carried by the guest, or any combination thereof, as suggested by Perez, thereby similarly using known configurations for capturing characteristics of a user that is interacting with a system that images the user.
Claims 6, 10-11, 13, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over EP 2136329 A2 to Shmueli in view of US Patent No. 10,555,393 B1 to Fu et al. ("Fu").
As to claim 6, Shmueli fails to disclose wherein the controller is configured to analyze the second portion of the imagery to determine that the guest responded to the initial output based on the guest turning their head toward the output device, focusing their gaze on or near the output device, standing in front of the output device, walking toward the output device, or any combination thereof.
Fu discloses wherein the controller is configured to analyze the second portion of the imagery to determine that the guest responded to the initial output based on the guest turning their head toward the output device, focusing their gaze on or near the output device, standing in front of the output device, walking toward the output device, or any combination thereof (col. 24, lines 20-43; See Fig. 9, 554-566; Fu discloses generating audio such as a simple greeting to draw the attention of the visitor to the camera. The camera sensor 150 captures a face image of the user in the field of view.).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to have modified Shmueli with the teachings of Fu wherein the controller is configured to analyze the second portion of the imagery to determine that the guest responded to the initial output based on the guest turning their head toward the output device, focusing their gaze on or near the output device, standing in front of the output device, walking toward the output device, or any combination thereof, as suggested by Fu, thereby similarly using known configurations for generating output to draw the attention of a user toward an interactive system.
As to claim 10, Fu discloses wherein the controller is configured to analyze the first portion of the imagery to identify the guest traveling along a path in the environment toward the output device, and to instruct the output device to provide the initial output to attempt to interact with the guest as the guest approaches the output device (col. 24, lines 1-43; Fu discloses an interactive stimulus may be used to initiate conversation with a visitor 252 when a person is detected in the field of view of the camera 150.).
As to claim 11, Fu discloses wherein the guest-specific additional output comprises a conversational interaction with the guest, and the controller is configured to use one or more artificial intelligence algorithms to carry out the conversational interaction with the guest (col. 24, lines 1-43; Fu discloses the AI technology may be configured to playback pre-recorded audio in order to initiate conversation with the visitor 252. Once the visitor is facing the camera, a second automated voice message can be output such as, “Can I help you?” with both video and audio recorded by the device.).
As to claim 13, Fu discloses wherein the controller is configured to analyze sounds received from one or more microphones to determine whether the at least one guest of the one or more guests responded to the initial output (col. 24, lines 1-20, “The audio selected by the artificial intelligence may be used to initiate a conversation with the visitor 252. The visitor 252 may respond via the microphone 164”.).
As to claim 17, Fu discloses wherein the enhanced additional output comprises a conversational interaction with the at least one guest of the one or more guests, and the controller is configured to use one or more artificial intelligence algorithms to carry out the conversational interaction with the at least one guest of the one or more guests (col. 22, lines 50-62; col. 24, lines 1-45; Fu discloses artificial intelligence may be configured to interact/converse with a user.).
As to claim 20, Fu discloses comprising: instructing, using the one or more processors and one or more artificial intelligence algorithms, the output device to carry out a conversational interaction with the at least one guest of the one or more guests to provide the guest-specific additional output (col. 22, lines 50-62; col. 24, lines 1-45; Fu discloses artificial intelligence may be configured to interact/converse with a user.).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS J LEE whose telephone number is (571)270-7354. The examiner can normally be reached Monday through Friday, 10 AM to 6 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Eason can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NICHOLAS J LEE/Primary Examiner, Art Unit 2624