DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s election without traverse of Group 1 in the reply filed on 12/18/2025 is acknowledged.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The disclosure is objected to because of the following informalities:
Paragraph 37, “of the some” should be “of some”;
Paragraph 37, “for the some” should be “for some”;
Paragraph 37, “that the some” should be “that some”; and
Paragraph 37, “The some” should be “Some”.
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “the one or more object emitters, to one or more special effect components in the interactive environment, or both to generate special effect outputs”; “one or more light emitters configured to emit light”; “the one or more light emitters to emit light”; “one or more object emitters to generate the special effect outputs”; “one or more haptic devices configured to emit haptic effects”; and “the one or more object emitters to emit light, sounds, haptic effects, or any combination thereof” in claims 1-5, 10, 15, and 22-23.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-2, 4-6, 9-10, 15, 21, and 25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a mental process carried out by generic computer elements. This judicial exception is not integrated into a practical application because the generic computer components do not amount to significantly more. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the recited processing circuitry, processors, memory, and speakers are all well-known, conventional computer parts. Further, the recited haptic devices, light emitters, and object emitters are extremely broad and general terms that can encompass any number of devices: a haptic device is merely a device that can give haptic feedback (i.e., feedback that stimulates touch), a light emitter is merely something that emits light, and an object emitter is something that emits some form of object, whether physical or non-physical.
In regards to claim 1, an interactive object control system, comprising: processing circuitry comprising one or more processors; and memory storing instructions, executable by the processing circuitry to cause the processing circuitry to: determine a respective signal strength of respective signals received at communication circuitry from respective interactive objects of a plurality of interactive objects in an interactive environment (The specification gives proximity between objects as an example of signal strength; as such, a person can determine this from a variety of objects and may receive signals indicating it); select a best candidate interactive object from the plurality of interactive objects based on the respective signal strength of the respective signals (A person of ordinary skill in the art can choose the one object that they believe is the best based on the respective signal strengths of all the objects); send instructions to the best candidate interactive object to activate one or more object emitters of the best candidate interactive object (As object emitters is very broadly defined, a person could send instructions to an emitter such as a voice-controlled switch to turn on something such as a light, or simply ask another person to activate some form of emitter or even to emit noise themselves); in response to detection of an emission from the one or more object emitters of the best candidate interactive object, tag the best candidate interactive object as a target interactive object (A person, in response to this emission, could tag an object by applying some form of physical tag or identifier to it); track motion of the target interactive object (A person could then physically track the motion of this object using their eyes); and provide output instructions to the one or more object emitters, to one or more special effect components in the interactive environment, or both to generate special effect outputs based on the motion of the target interactive object (A person of ordinary skill in the art could then provide instructions to a person or a voice-controlled device to provide special effects, whether that be lighting or sound effects, for example a “whooshing” noise if they are swinging the sword object, or any number of possible combinations).
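For illustration only, the sequence of steps recited in claim 1 can be sketched as follows. This is a hypothetical sketch; every identifier below is invented for illustration and appears nowhere in the application or the applied art:

```python
# Hypothetical sketch of the claim 1 sequence; all identifiers are
# illustrative and are not drawn from the application or the prior art.

def select_best_candidate(signal_strengths):
    """Pick the interactive object whose received signal is strongest."""
    return max(signal_strengths, key=signal_strengths.get)

def control_sequence(signal_strengths, activate_emitter, emission_detected, track_motion):
    best = select_best_candidate(signal_strengths)
    activate_emitter(best)            # send instructions to activate the object emitters
    if emission_detected(best):       # detection of the emission confirms the candidate
        target = best                 # tag the candidate as the target interactive object
        return track_motion(target)   # track motion and drive the special effect outputs
    return None
```

The sketch only mirrors the claimed ordering of steps (determine, select, activate, tag, track, output); it does not represent any disclosed implementation.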
In regards to claim 2, wherein the one or more object emitters comprises one or more light emitters configured to emit light, one or more speakers configured to emit sounds, or both (A speaker, in this case, is intended to be a literal computer speaker, but the BRI would include human speakers, as narration is a common theme and special effect in various games, amusement park rides, and so on).
In regards to claim 4, wherein the instructions are executable by the processing circuitry to cause the processing circuitry to provide the output instructions to the one or more object emitters to generate the special effect outputs (A person of ordinary skill in the art could provide instructions to a person or voice-controlled device to provide special effects, whether that be lighting or sound effects, for example a “whooshing” noise if they are swinging the sword object, or any number of possible combinations).
In regards to claim 5, wherein the one or more object emitters comprises one or more light emitters configured to emit light, one or more speakers configured to emit sounds, one or more haptic devices configured to emit haptic effects, or any combination thereof, and the output instructions are configured to activate the one or more object emitters to cause the one or more light emitters to emit light, the one or more speakers to emit the sounds, the one or more haptic devices to emit the haptic effects, or any combination thereof (A person of ordinary skill in the art could provide instructions to a person or voice-controlled device to provide special effects, whether that be lighting, haptic, or sound effects, for example a “whooshing” noise if they are swinging the sword object, or any number of possible combinations).
In regards to claim 6, wherein the instructions are executable by the processing circuitry to cause the processing circuitry to provide the output instructions to the one or more special effect components in the interactive environment to generate the special effect outputs, and the one or more special effect components comprises a display, a light emitter, a speaker, a haptic device, or any combination thereof (A speaker, in this case, is intended to be a literal computer speaker, but the BRI would include human speakers, as narration is a common theme and special effect in various games, amusement park rides, and so on. Further, a haptic device is merely a device to provide some haptic feedback, which a human can facilitate by merely shaking a device held by someone else; even further, a human being can use a pen and paper to display an image or words that are desired to be displayed).
In regards to claim 9, wherein the instructions are executable by the processing circuitry to cause the processing circuitry to: in response to no detection of the emission from the one or more object emitters of the best candidate interactive object, select an additional best candidate interactive object from the plurality of interactive objects based on the respective signal strength of the respective signals (A person of ordinary skill in the art could observe that their designated object emitter was not emitting the relevant special effects, and then choose an alternative object); and send instructions to the additional best candidate interactive object to activate one or more additional object emitters of the additional best candidate interactive object (Then this person of ordinary skill in the art could give the next object emitter some new instructions; for our example from earlier, they could substitute the sword for a toy airplane and have the new emitter make plane noises instead of the aforementioned “whooshing” sound).
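For illustration only, the fallback recited in claim 9 (selecting an additional candidate when no emission is detected) can be sketched as follows; again, all identifiers are hypothetical and this is not drawn from the application or the applied art:

```python
# Hypothetical sketch of the claim 9 fallback; identifiers are illustrative only.

def select_with_fallback(signal_strengths, emission_detected):
    """Try candidates in order of descending signal strength until an emission is seen."""
    for candidate in sorted(signal_strengths, key=signal_strengths.get, reverse=True):
        if emission_detected(candidate):
            return candidate
    return None  # no candidate's emission was detected
```

The sketch merely expresses the claimed retry ordering (strongest remaining signal first); it implies nothing about any disclosed implementation.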
In regards to claim 10, it is similar to claim 1, and it is rejected similarly.
In regards to claim 15, wherein generating the special effect outputs based on the motion of the target interactive object comprises sending the output instructions to activate the one or more object emitters to emit light, sounds, haptic effects, or any combination thereof (A person of ordinary skill in the art could provide instructions to a person or voice-controlled device to provide special effects, whether that be lighting, haptic, or sound effects, for example a “whooshing” noise if they are swinging the sword object, or any number of possible combinations).
In regards to claim 21, wherein the respective signals are indicative of respective unique identifiers of the respective interactive objects of the plurality of interactive objects, and the instructions are executable by the processing circuitry to cause the processing circuitry to: use the respective unique identifier of the best candidate interactive object to direct the instructions to the best candidate interactive object to activate the one or more object emitters of the best candidate interactive object (A person of ordinary skill in the art could identify some form of identifier or tag and then choose which object emitter would be best based on this tag).
In regards to claim 25, wherein the target interactive object comprises a sword, a wand, a token, a medallion, headgear, a figurine, a stuffed animal, a clothing item, a jewelry item, a portable object, or any combination thereof (The object used could be essentially anything identified by the user, which would render these examples insignificant extra-solution activity).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-6, 10, 15, 18-19, and 23 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Yeh et al. (US 20190220635 A1), hereinafter referred to as Yeh.
The applied reference has a common inventor and applicant with the instant application. Based upon the earlier effectively filed date of the reference, it constitutes prior art under 35 U.S.C. 102(a)(2). This rejection under 35 U.S.C. 102(a)(2) might be overcome by: (1) a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application and is thus not prior art in accordance with 35 U.S.C. 102(b)(2)(A); (2) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B) if the same invention is not being claimed; or (3) a statement pursuant to 35 U.S.C. 102(b)(2)(C) establishing that, not later than the effective filing date of the claimed invention, the subject matter disclosed in the reference and the claimed invention were either owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement.
In regards to claim 1, Yeh discloses an interactive object control system, comprising: processing circuitry comprising one or more processors; and memory storing instructions, executable by the processing circuitry to cause the processing circuitry to: determine a respective signal strength of respective signals received at communication circuitry from respective interactive objects of a plurality of interactive objects in an interactive environment (Paragraph 33, This paragraph discloses that certain interactive objects are activated within a certain distance, and this reads upon the determination of a best object by signal strength, as proximity is described within this application’s specification as a way to measure signal strength); select a best candidate interactive object from the plurality of interactive objects based on the respective signal strength of the respective signals (Paragraph 28, This paragraph details that a specific object can be chosen to be activated, which would be within a BRI of best candidate objects); send instructions to the best candidate interactive object to activate one or more object emitters of the best candidate interactive object (Paragraph 28, This paragraph details that a specific object can be chosen to be activated, which would be within a BRI of best candidate objects); in response to detection of an emission from the one or more object emitters of the best candidate interactive object, tag the best candidate interactive object as a target interactive object (Paragraph 17, Discloses the use of an RFID tag, which would be within a BRI of tagging, and that this is used to track the individual or object); track motion of the target interactive object (Paragraph 44, This paragraph discloses that the subject can be tracked via emitted light); and provide output instructions to the one or more object emitters, to one or more special effect components in the interactive environment, or both to generate special effect outputs based on the motion of the target interactive object (Paragraph 28, This paragraph details that a specific object can be chosen to be activated, which would be within a BRI of best candidate objects, with the lights being activated being within a BRI of special effects).
In regards to claim 2, Yeh discloses wherein the one or more object emitters comprises one or more light emitters configured to emit light, one or more speakers configured to emit sounds, or both (Paragraph 24 discloses that the feedback device can consist of LEDs and speakers to provide feedback to the user when the objects are activated).
In regards to claim 3, Yeh discloses wherein the one or more object emitters comprises one or more light emitters configured to emit light, and the instructions to the best candidate interactive object to activate the one or more object emitters of the best candidate interactive object cause the one or more light emitters to emit light (Paragraph 24, The feedback system receives the signals which activate certain responses; since the feedback device also has LEDs, these would be activated when relevant).
In regards to claim 4, Yeh discloses wherein the instructions are executable by the processing circuitry to cause the processing circuitry to provide the output instructions to the one or more object emitters to generate the special effect outputs (Paragraph 24, The feedback system receives the signals which activate certain responses; since the feedback device also has LEDs, haptics, and speakers, these would be activated when relevant).
In regards to claim 5, Yeh discloses comprising the plurality of interactive objects, wherein the one or more object emitters comprises one or more light emitters configured to emit light, one or more speakers configured to emit sounds, one or more haptic devices configured to emit haptic effects, or any combination thereof (Paragraph 24, The feedback system receives the signals which activate certain responses; since the feedback device also has LEDs, haptics, and speakers, these would be activated when relevant), and the output instructions are configured to activate the one or more object emitters to cause the one or more light emitters to emit light, the one or more speakers to emit the sounds, the one or more haptic devices to emit the haptic effects, or any combination thereof (Paragraph 24, The feedback system receives the signals which activate certain responses; since the feedback device also has LEDs, haptics, and speakers, these would be activated when relevant).
In regards to claim 6, Yeh discloses wherein the instructions are executable by the processing circuitry to cause the processing circuitry to provide the output instructions to the one or more special effect components in the interactive environment to generate the special effect outputs (Paragraph 24, The feedback system receives the signals which activate certain responses; since the feedback device also has LEDs, haptics, and speakers, these would be activated when relevant), and the one or more special effect components comprises a display, a light emitter, a speaker, a haptic device, or any combination thereof (Paragraphs 16 and 24, The feedback system receives the signals which activate certain responses; since the feedback device also has LEDs, haptics, and speakers, these would be activated when relevant; further, paragraph 16 discloses a touchscreen display).
In regards to claim 10, it is similar to claim 1, and it is similarly rejected.
In regards to claim 15, Yeh discloses wherein generating the special effect outputs based on the motion of the target interactive object comprises sending the output instructions to activate the one or more object emitters to emit light, sounds, haptic effects, or any combination thereof (Paragraph 24, The feedback system receives the signals which activate certain responses; since the feedback device also has LEDs, haptics, and speakers, these would be activated when relevant).
In regards to claim 18, Yeh discloses an interactive object control system, comprising: a first interactive object of a plurality of interactive objects, the first interactive object comprising a first light emitter and a first radiofrequency identification tag circuitry (Paragraphs 26-28, These paragraphs disclose the usage of the first RFID tag and the first light emitter within a first object); a second interactive object of the plurality of interactive objects, the second interactive object comprising a second light emitter and a second radiofrequency identification tag circuitry (Paragraphs 29-31, These paragraphs detail a second RFID tag and second set of light emitters); and processing circuitry comprising one or more processors and memory storing instructions, executable by the processing circuitry to cause the processing circuitry to: detect a respective presence of the first interactive object in an interactive area based on receipt of a first signal from the first radiofrequency identification tag circuitry (Paragraphs 29-30 and 64-65, While these paragraphs do not directly recite that the first reader is associated with a particular object, the second reader is. Paragraphs 64-65 establish that combinations not disclosed but relating to the tracking are covered by the application, and simply having the first reader perform the process of the second reader would be well within those bounds. Further, a room would be within a BRI of the interactive area); detect a respective presence of the second interactive object in the interactive area based on receipt of a second signal from the second radio frequency identification tag circuitry (Paragraphs 29-30, These paragraphs directly disclose that the second reader is used to detect a presence of a respective object); determine a first signal strength of the first signal (Paragraph 28, This paragraph establishes the distance from the room/object, or the number of hertz in a frequency, either of which would be within the BRI of signal strength); determine a second signal strength of the second signal (Paragraph 30, This paragraph establishes the distance from the object, or the number of hertz in a frequency, either of which would be within the BRI of signal strength); and designate the first interactive object as a best candidate interactive object in response to the first signal strength being greater than the second signal strength (Paragraph 28, This paragraph details that a specific object can be chosen to be activated, which would be within a BRI of best candidate objects, and one could use the factors disclosed previously).
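For illustration only, the designation step recited in claim 18 reduces to a comparison of the two measured signal strengths. The sketch below is hypothetical; its identifiers are invented and do not come from the application or the applied art:

```python
# Hypothetical sketch of the claim 18 designation step; illustrative only.

def designate_best(first_strength, second_strength):
    """Designate the first object when its signal strength is greater than the second's."""
    return "first" if first_strength > second_strength else "second"
```

The comparison itself is the entirety of the recited designation condition; the units and measurement of "signal strength" are left open by the claim.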
In regards to claim 19, Yeh discloses wherein the instructions are executable by the processing circuitry to cause the processing circuitry to send instructions to the first radiofrequency tag circuitry of the first interactive object to activate the first light emitter of the first interactive object in response to designating the first interactive object as the best candidate interactive object (Paragraph 28, This paragraph details that a specific object can be chosen to be activated which would be within a BRI of best candidate objects with the lights being activated).
In regards to claim 23, Yeh does disclose wherein the instructions are executable by the processing circuitry to cause the processing circuitry to: instruct one or more emitters to emit light (Paragraph 24 discloses that the feedback device can consist of LEDs and speakers to provide feedback to the user when the objects are activated); and track the motion of the target interactive object using respective light reflection signals generated via reflection of the light by the target interactive object (Paragraphs 44-45, These paragraphs disclose that the subject can be tracked via emitted light, and the inclusion of a light detector implies that it would be able to detect reflected light signals).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 7-8, 16, 20-22, and 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over Yeh et al. (US 20190220635 A1), hereinafter referred to as Yeh, in view of Barney et al. (US 20200001176 A1).
In regards to claim 7, Yeh does not disclose wherein the instructions are executable by the processing circuitry to cause the processing circuitry to access respective user profiles for the respective interactive objects of the plurality of interactive objects in the interactive environment based on the respective signals received at the communication circuitry.
However, Barney does disclose wherein the instructions are executable by the processing circuitry to cause the processing circuitry to access respective user profiles for the respective interactive objects of the plurality of interactive objects in the interactive environment based on the respective signals received at the communication circuitry (Paragraph 366, The reference discloses, via example, a user profile built around a character named Jimmy associated with the object, which would be within a BRI of this claim).
It would be prima facie obvious to combine these references, as the combination amounts to simple substitution. Paragraph 64 of Yeh discloses the usage of user profiles in a broader and more general sense than the claim requires, while Barney’s implementation of user profiles is much more specific and detailed in how it interacts with the various users. As such, one could simply substitute the user profile system of Yeh with the user profile system from Barney, and the result would have been prima facie obvious.
In regards to claim 8, Barney discloses wherein the instructions are executable by the processing circuitry to cause the processing circuitry to select the best candidate interactive object from the plurality of interactive objects based on the respective signal strength of the respective signals and the respective user profiles (Paragraphs 165-167, These paragraphs disclose that accurate performance of the gesture causes certain spells to be completed, but if not, a different sound effect is played. As shown in paragraphs 166 and 167, the users have specific profiles that allow them to cast certain spells, and the “strength” of their gestures determines if the spell is done correctly).
Regarding claim 16, Barney discloses accessing, using the one or more processors, respective user profiles for the respective interactive objects of the plurality of interactive objects in the interactive environment (Paragraphs 166-167: these paragraphs disclose that the objects maintain various levels per user, so the objects have the associated profiles); and selecting, using the one or more processors, the best candidate interactive object from the plurality of interactive objects based on the respective signal strength of the respective signals and the respective user profiles (Paragraphs 165-167: an accurately performed gesture causes certain spells to be completed, while an inaccurate gesture produces a different sound effect. As shown in Paragraphs 166 and 167, the users have specific profiles that allow them to cast certain spells, and the "strength" of their gestures determines whether a spell is performed correctly).
It would have been prima facie obvious to combine these references as a simple substitution. Paragraph 64 of Yeh discloses the use of user profiles in a broader and more general sense than the claim requires, while Barney's implementation of user profiles is more specific and detailed in how it interacts with the various users. As such, one could simply substitute the user profile system of Yeh with the user profile system of Barney, and the combination would have been prima facie obvious.
Regarding claim 20, Yeh discloses wherein the instructions are executable by the processing circuitry to cause the processing circuitry to: tag the best candidate interactive object as a target interactive object in response to receipt of a sensor signal from one or more sensors, wherein the sensor signal indicates detection of light emitted by the first light emitter of the first interactive object (Paragraphs 44-45: these paragraphs disclose that the subject can be tracked via emitted light, and the inclusion of a light detector implies that the system would be able to detect light emitted by the first light emitter).
Yeh does not, however, disclose tracking motion of the target interactive object and generating special effect outputs based on the motion of the target interactive object.
Barney discloses tracking motion of the target interactive object and generating special effect outputs based on the motion of the target interactive object (Paragraph 159: an appropriate motion of the wand is used to trigger the book-hovering effect).
It would have been prima facie obvious to combine these two references, as the combination would lead to a predictable increase in player engagement. Overly complex systems in a game setting can be arduous for players, but a major aspect of the game's function is based on the use of a wand or other motion-captured object. For implementations like the spellcasting shown in Barney, the ability to combine multiple spells or to access higher tiers of spells increases player engagement by giving players an incentive to keep playing and further upgrade their skills. Accordingly, the combination would have been prima facie obvious.
Regarding claim 21, Barney discloses wherein the respective signals are indicative of respective unique identifiers of the respective interactive objects of the plurality of interactive objects, and the instructions are executable by the processing circuitry to cause the processing circuitry to: use the respective unique identifier of the best candidate interactive object to direct the instructions to the best candidate interactive object to activate the one or more object emitters of the best candidate interactive object (Paragraph 109: Barney discloses that the unique ID identifies which spells a user can use, such that the ID determines which spells, with their specific special effects, are available when the user performs the correct gesture).
Regarding claim 22, Barney discloses wherein the instructions are executable by the processing circuitry to cause the processing circuitry to: determine a performance of a gesture based on the motion of the interactive object (Paragraph 159: an appropriate motion of the wand is used to trigger the book-hovering effect); provide output instructions to the one or more object emitters and to the one or more special effect components in the interactive environment to generate the special effect outputs based on the performance of the gesture (Paragraph 159: an appropriate motion of the wand is used to trigger the book-hovering effect); and use the respective unique identifier to update a respective user profile based on the performance of the gesture (Paragraph 168: the wand performance requires the use of certain gestures, and the gaming information associated with the unique ID of each wand covers the recited user profile).
Regarding claim 24, Barney discloses wherein the respective signals comprise passively-communicated radiofrequency signals (Paragraph 160: the disclosure of passive RFID technology in one of the disclosed embodiments covers the recited passively-communicated radiofrequency signals).
Regarding claim 25, Yeh does not explicitly disclose wherein the target interactive object comprises a sword, a wand, a token, a medallion, headgear, a figurine, a stuffed animal, a clothing item, a jewelry item, a portable object, or any combination thereof.
However, Barney discloses wherein the target interactive object comprises a sword, a wand, a token, a medallion, headgear, a figurine, a stuffed animal, a clothing item, a jewelry item, a portable object, or any combination thereof (Paragraph 105: this paragraph discloses a number of examples, some of which directly overlap the recited items, such as jewelry and stuffed animals, and all of which fall well within the recited portable objects).
Allowable Subject Matter
Claim 9 is not rejected under 35 U.S.C. 102 or 103.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CONOR AIDAN O'MALLEY, whose telephone number is (571) 272-0226. The examiner can normally be reached Monday through Friday, 9:00 am to 5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Moyer, can be reached at 572-272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
CONOR AIDAN O'MALLEY
Examiner
Art Unit 2675
/CONOR A O'MALLEY/Examiner, Art Unit 2675
/ANDREW M MOYER/Supervisory Patent Examiner, Art Unit 2675