DETAILED ACTION
This Office action is responsive to the communication filed on November 19, 2025. Claims 1, 3-6 and 8-20 are pending in the application and have been examined by the Examiner.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on November 19, 2025 has been entered.
Terminal Disclaimer
The terminal disclaimer filed on October 20, 2025 disclaiming the terminal portion of any patent granted on this application which would extend beyond the expiration date of US Patent No. 10,212,325 has been reviewed and is accepted. The terminal disclaimer has been recorded.
Response to Arguments
Applicant's arguments filed November 19, 2025 have been fully considered but they are not persuasive.
Applicant argues, with respect to claims 1, 6 and 20, that in Lee et al. the camera is attached to the subject performing the athletic maneuver and does not record from the point of view of a spectator. Applicant further argues that, because the point of view of a recording device attached to a subject performing an athletic movement is fundamentally different from the point of view of a spectator watching the performance of a plurality of actors, Lee cannot anticipate the inventions recited in claims 1, 6 and 20 and their dependent claims.
The Examiner respectfully disagrees. Paragraph 0104 of Lee et al. states “camera 404 may not be attached to the person (e.g., mounted to a windshield and facing the user)”. Paragraph 0106 of Lee et al. states “camera 404 attached to a flying device may fly close or approach the user, pull back or profile the user with a circular path”. Based upon at least these recitations, it is clear that the camera may be detached from the subject and capture images from a point of view of a spectator.
Applicant argues, with respect to claims 1, 6 and 20, that Lee et al. does not disclose the controlling of a camera based on predicted states that are about to occur.
The Examiner respectfully disagrees. The central purpose of the invention of Lee et al. is the generation of highlight video clips (see paragraph 0003). In order to generate highlights, a camera direction and/or a camera zoom may be changed such that the user may be recorded in greater detail based upon a user’s motion signature (see paragraph 0106). For instance, a camera may not be initially pointed at a user and may be redirected toward the user based upon the user’s motion signature (see paragraph 0110). Since the camera is redirected toward the user upon detection of a motion signature such that the user is captured in greater detail in order to generate highlight clips, it is clear that the motion signature of the user is predictive that an athletic maneuver worthy of a highlight clip is about to occur. Additionally, Lee et al. teaches that an event time of a highlight may be based on a rate of change of sensor parameters “prior to a physical event” (see paragraph 0073). Therefore, Lee et al. teaches controlling the camera based on predicted states that are about to occur.
Applicant argues, with respect to claim 1, that since the point of view of a recording device attached to a subject performing an athletic movement is fundamentally different from the point of view of a spectator watching the performance of a plurality of actors, there is no reasonable expectation of success in modifying the camera of Lee to arrive at an invention that can “identify an actor of focus from the plurality of actors” and/or perform the “adjusting the camera, to focus on the actor among the plurality of actors”. Applicant notes that Boyle discloses an automatic video recording system 10 configured to record sporting events involving multiple participants, and argues that the Boyle and Lee systems are fundamentally different with respect to the points of view of their cameras. Applicant further argues that, because neither Boyle nor Lee suggests a technique for predictive adjustment of the operation of a camera based on data generated by sensors attached to subjects/participants/actors, no combination of Boyle and Lee can lead to the inventions recited in the pending claims.
The Examiner respectfully disagrees. Lee et al. teaches that the point of view is from a point of view of a spectator, as discussed above. Therefore, the cameras of Lee et al. and Boyle et al. have the same points of view. Lee et al. teaches a predictive adjustment of the operations of the camera based on the data generated by the sensors, as discussed above.
How Lee et al. and Boyle et al. read on the other amended limitations of claims 1, 6 and 20 is discussed in the accompanying prior art rejections of these claims.
Therefore, the rejections are maintained by the Examiner.
Double Patenting
All double patenting rejections are hereby removed in view of the Terminal Disclaimer filed October 20, 2025.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 6, 8, 9 and 15-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Lee et al. (US 2016/0225410).
Consider claim 6, Lee et al. teaches:
A method, comprising:
receiving, by a computing device having a camera (camera, 404, figure 4A) not attached to an actor (Paragraph 0104 of Lee et al. states “camera 404 may not be attached to the person (e.g., mounted to a windshield and facing the user)”. Paragraph 0106 of Lee et al. states “camera 404 attached to a flying device may fly close or approach the user, pull back or profile the user with a circular path”. Based upon at least these recitations, it is clear that the camera may be detached from the subject and capture images from a point of view of a spectator.), sensor data from one or more sensors (sensor, 406) attached to the actor (Sensor (406) is “worn by the user” (paragraph 0106), and transmits sensor parameters (i.e. sensor data) that are received by the camera (404), paragraphs 0106, 0109 and 0110.) among a plurality of actors having attached sensors in communication with the computing device (Paragraph 0119 states “each of cameras 504.1-504.N may be configured to receive one or more sensor parameter values from any suitable number of users' respective sensor devices”.);
predicting, by the computing device, a state of the actor that is about to occur (In order to generate highlights, a camera direction and/or a camera zoom may be changed such that the user may be recorded in greater detail based upon a user’s motion signature (see paragraph 0106). For instance, a camera may not be initially pointed at a user and may be redirected toward the user based upon the user’s motion signature (see paragraph 0110). Since the camera is redirected toward the user upon detection of a motion signature such that the user is captured in greater detail in order to generate highlight clips, it is clear that the motion signature of the user is predictive that an athletic maneuver (i.e. a state of the actor) worthy of a highlight clip is about to occur. Additionally, Lee et al. teaches that an event time of a highlight may be based on a rate of change of sensor parameters “prior to a physical event” (see paragraph 0073). Therefore, Lee et al. teaches controlling the camera based on predicted states that are about to occur.), among the plurality of actors, based on the sensor data received from the one or more sensors (e.g. determining whether the sensor parameters match a stored motion signature, paragraphs 0106, 0109 and 0110) attached to the actor (“worn by the user”, paragraph 0106); and
controlling the camera (404) based on the state of the actor predicted based on the sensor data (e.g. to cause the camera to change a zoom level, initiate recording of video, and/or change a camera direction, paragraphs 0056, 0106, 0109 and 0110.), including:
determining an operation parameter of the camera based on the state predicted based on the sensor data; and adjusting the camera, to focus on the actor among the plurality of actors, based on the operation parameter (e.g. to cause the camera to change a zoom level, initiate recording of video, and/or change a camera direction, paragraphs 0056, 0106, 0109 and 0110.).
Consider claim 8, and as applied to claim 6 above, Lee et al. further teaches that the operation parameter includes a zoom level of the camera (e.g. to cause the camera to change a zoom level, initiate recording of video, and/or change a camera direction, paragraphs 0056, 0106, 0109 and 0110.).
Consider claim 9, and as applied to claim 6 above, Lee et al. further teaches that the operation parameter includes a direction of the camera (e.g. to cause the camera to change a zoom level, initiate recording of video, and/or change a camera direction, paragraphs 0056, 0106, 0109 and 0110.).
Consider claim 15, and as applied to claim 6 above, Lee et al. further teaches that the one or more sensors attached to the actor are attached to a piece of athletic equipment of the actor (“fastened to a snowboarding or skiing equipment”, paragraph 0040).
Consider claim 16, and as applied to claim 6 above, Lee et al. further teaches that the one or more sensors attached to the actor include at least one of: a GPS device, an inertial sensor, a magnetic sensor, and a pressure sensor (e.g. accelerometers or magnetometers, paragraph 0033).
Consider claim 17, and as applied to claim 6 above, Lee et al. further teaches causing the camera to capture one or more images based on the controlling of the camera (“initiate recording video”, paragraph 0106).
Consider claim 18, and as applied to claim 17 above, Lee et al. further teaches overlaying data derived from the sensor data on the one or more images (See figure 3A. Generated information indicative of the state of the actor is presented in box 312 of the user interface 300, paragraphs 0083, 0086 and 0087.).
Consider claim 19, and as applied to claim 17 above, Lee et al. further teaches tagging the one or more images with information derived from the sensor data (see paragraphs 0044, 0045, 0050, 0051, 0057, 0107 and 0108); and
storing the tagged images in a database (memory unit, 112), wherein the tagged images are retrievable via the tagged information (see paragraphs 0044, 0045, 0051, 0057 and 0065).
Consider claim 20, Lee et al. teaches:
A non-transitory computer-readable medium (memory unit, 112, figure 1, paragraphs 0054 and 0055) storing instructions that, when executed by a computing device (camera, 102, figure 1, 404, figure 4), cause the computing device to perform a method, the method comprising:
receiving, by a computing device having a camera (camera, 404, figure 4A) not attached to an actor (Paragraph 0104 of Lee et al. states “camera 404 may not be attached to the person (e.g., mounted to a windshield and facing the user)”. Paragraph 0106 of Lee et al. states “camera 404 attached to a flying device may fly close or approach the user, pull back or profile the user with a circular path”. Based upon at least these recitations, it is clear that the camera may be detached from the subject and capture images from a point of view of a spectator.), sensor data from one or more sensors (sensor, 406) attached to the actor (Sensor (406) is “worn by the user” (paragraph 0106), and transmits sensor parameters (i.e. sensor data) that are received by the camera (404), paragraphs 0106, 0109 and 0110.) among a plurality of actors having attached sensors in communication with the computing device (Paragraph 0119 states “each of cameras 504.1-504.N may be configured to receive one or more sensor parameter values from any suitable number of users' respective sensor devices”.);
predicting, by the computing device, a state of the actor that is about to occur (In order to generate highlights, a camera direction and/or a camera zoom may be changed such that the user may be recorded in greater detail based upon a user’s motion signature (see paragraph 0106). For instance, a camera may not be initially pointed at a user and may be redirected toward the user based upon the user’s motion signature (see paragraph 0110). Since the camera is redirected toward the user upon detection of a motion signature such that the user is captured in greater detail in order to generate highlight clips, it is clear that the motion signature of the user is predictive that an athletic maneuver (i.e. a state of the actor) worthy of a highlight clip is about to occur. Additionally, Lee et al. teaches that an event time of a highlight may be based on a rate of change of sensor parameters “prior to a physical event” (see paragraph 0073). Therefore, Lee et al. teaches controlling the camera based on predicted states that are about to occur.), among the plurality of actors, based on the sensor data received from the one or more sensors (e.g. determining whether the sensor parameters match a stored motion signature, paragraphs 0106, 0109 and 0110) attached to the actor (“worn by the user”, paragraph 0106); and
controlling the camera (404) based on the state of the actor predicted based on the sensor data (e.g. to cause the camera to change a zoom level, initiate recording of video, and/or change a camera direction, paragraphs 0056, 0106, 0109 and 0110.), including:
determining an operation parameter of the camera based on the state predicted based on the sensor data; and adjusting the camera, to focus on the actor among the plurality of actors, based on the operation parameter (e.g. to cause the camera to change a zoom level, initiate recording of video, and/or change a camera direction, paragraphs 0056, 0106, 0109 and 0110.).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 10-14 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US 2016/0225410) in view of Boyle et al. (US 2013/0242105).
Consider claim 10, and as applied to claim 6 above, Lee et al. teaches multiple users (i.e. actors, paragraphs 0018 and 0119).
However, Lee et al. does not explicitly teach, as a whole, that the actor is a first actor; and the method further comprises: receiving, by the computing device, second sensor data from one or more sensors attached to a second actor; determining, by the computing device, a state of the second actor based on the second sensor data; comparing the state of the first actor and the state of the second actor; and selecting an operation parameter for the camera based on the comparing of the state of the first actor and the state of the second actor.
Boyle et al. similarly teaches:
A system (figure 2) comprising:
a plurality of processing units (remote devices, 15, 16, 17) associated with a plurality of actors (“multiple subjects”) participating in an activity (paragraph 0023), wherein each respective actor of the actors having one or more sensors (e.g. global positioning antennas, paragraphs 0019 and 0025) in communication with a respective processing unit among the plurality of processing units (paragraphs 0019 and 0025), the respective processing unit configured to process sensor data from the one or more sensors to identify a state of the respective actor (i.e. the location of the actor, paragraph 0025);
at least one interface to communicate with the processing units (15, 16, 17, see paragraph 0023, figure 2), receive from the processing units data identifying states of the actors determined from sensors attached to the actors (i.e. the locations of the actors, paragraphs 0024 and 0025), and communicate with a camera (base station, 18, positioner, 32, camera, 46, paragraph 0023);
at least one microprocessor (Base station (18) is a computer, paragraph 0019.); and
a memory storing instructions configured to instruct the at least one microprocessor (The base station (18) is “programmed” as detailed in paragraph 0024.) to
select an actor from the plurality of actors based on the states of the actors (As detailed in paragraph 0024, when there are a plurality of actors (i.e. wearing remote devices, 16), one specific remote device at a time is selected for tracking by one camera (46).); and
adjust, via the at least one interface, an operation parameter of the camera to focus on the actor selected, based on the data identifying the states of the actors from the plurality of actors (The focus, zoom and/or pointing direction of the camera is adjusted according to the locations of the actors, paragraphs 0021, 0024, 0027 and 0034.).
However, Boyle et al. additionally teaches that the actor is a first actor; and the method further comprises: receiving, by the computing device, second sensor data from one or more sensors attached to a second actor; determining, by the computing device, a state of the second actor based on the second sensor data; comparing the state of the first actor and the state of the second actor; and selecting an operation parameter for the camera based on the comparing of the state of the first actor and the state of the second actor (See paragraphs 0023 and 0024, figure 2. There are a plurality of actors wearing a plurality of sensors (i.e. in remote devices 15, 16 and 17). The locations (i.e. states) of the actors are compared and the camera is adjusted to follow the actor with the closest location to the camera.).
Therefore, it would have been obvious to a person having ordinary skill in the art at the time of the invention (pre-AIA) or before the effective filing date of the claimed invention (AIA) to have the method taught by Lee et al. include selecting an operation parameter for the camera in the manner taught by Boyle et al. for the benefit of enabling a desired performance to be recorded and viewed (Boyle et al., paragraph 0002).
Consider claim 11, and as applied to claim 10 above, Lee et al. does not explicitly teach the comparing of the states of the first and second actors.
Boyle et al. similarly teaches:
A system (figure 2) comprising:
a plurality of processing units (remote devices, 15, 16, 17) associated with a plurality of actors (“multiple subjects”) participating in an activity (paragraph 0023), wherein each respective actor of the actors having one or more sensors (e.g. global positioning antennas, paragraphs 0019 and 0025) in communication with a respective processing unit among the plurality of processing units (paragraphs 0019 and 0025), the respective processing unit configured to process sensor data from the one or more sensors to identify a state of the respective actor (i.e. the location of the actor, paragraph 0025);
at least one interface to communicate with the processing units (15, 16, 17, see paragraph 0023, figure 2), receive from the processing units data identifying states of the actors determined from sensors attached to the actors (i.e. the locations of the actors, paragraphs 0024 and 0025), and communicate with a camera (base station, 18, positioner, 32, camera, 46, paragraph 0023);
at least one microprocessor (Base station (18) is a computer, paragraph 0019.); and
a memory storing instructions configured to instruct the at least one microprocessor (The base station (18) is “programmed” as detailed in paragraph 0024.) to
select an actor from the plurality of actors based on the states of the actors (As detailed in paragraph 0024, when there are a plurality of actors (i.e. wearing remote devices, 16), one specific remote device at a time is selected for tracking by one camera (46).); and
adjust, via the at least one interface, an operation parameter of the camera to focus on the actor selected, based on the data identifying the states of the actors from the plurality of actors (The focus, zoom and/or pointing direction of the camera is adjusted according to the locations of the actors, paragraphs 0021, 0024, 0027 and 0034.).
However, Boyle et al. further teaches that the selecting of the operation parameter includes: selecting the first actor from a plurality of actors including the first actor and the second actor, based on comparing the state of the first actor and the state of the second actor; and determining the operation parameter based on an identification of the first actor selected based on the comparing (See paragraphs 0023 and 0024, figure 2. There are a plurality of actors wearing a plurality of sensors (i.e. in remote devices 15, 16 and 17). The locations (i.e. states) of the actors are compared and the camera is adjusted to follow the actor (e.g. the first actor) with the closest location to the camera. The first actor is identified based upon a specific transmitted code, paragraph 0024.).
Therefore, it would have been obvious to a person having ordinary skill in the art at the time of the invention (pre-AIA) or before the effective filing date of the claimed invention (AIA) to have the method taught by Lee et al. include selecting an operation parameter for the camera in the manner taught by Boyle et al. for the benefit of enabling a desired performance to be recorded and viewed (Boyle et al., paragraph 0002).
Consider claim 12, and as applied to claim 11 above, Lee et al. further teaches that the operation parameter is selected to increase a percentage of an image of the first actor within an image captured by the camera (“higher zoom level”, paragraph 0109).
Boyle et al. also teaches that the operation parameter is selected to increase a percentage of an image of the first actor within an image captured by the camera (The camera is adjusted to follow the actor (e.g. the first actor) with the closest location to the camera, paragraphs 0016 and 0024. As such, the first actor will occupy an increased percentage of an image. See, for instance, the actor (i.e. first actor) wearing remote device 16 in figure 2 compared to the actor (i.e. second actor) wearing remote device 15 in figure 2.).
Consider claim 13, and as applied to claim 12 above, Lee et al. does not explicitly teach that the operation parameter is selected to reduce a percentage of an image of the second actor within the image captured by the camera.
Boyle et al. further teaches that the operation parameter is selected to reduce a percentage of an image of the second actor within the image captured by the camera (The camera is adjusted to follow the actor (e.g. the first actor) with the closest location to the camera, paragraphs 0016 and 0024. As such, the second actor will occupy a decreased percentage of an image. See, for instance, the actor (i.e. first actor) wearing remote device 16 in figure 2 compared to the actor (i.e. second actor) wearing remote device 15 in figure 2.).
Therefore, it would have been obvious to a person having ordinary skill in the art at the time of the invention (pre-AIA) or before the effective filing date of the claimed invention (AIA) to have the method taught by Lee et al. include selecting an operation parameter for the camera in the manner taught by Boyle et al. for the benefit of enabling a desired performance to be recorded and viewed (Boyle et al., paragraph 0002).
Consider claim 14, and as applied to claim 11 above, Lee et al. further teaches that an operation parameter is selected based on a location of a first actor (The camera direction is changed to record video of the user in greater detail, paragraph 0106.).
Boyle et al. also teaches that the operation parameter is selected based on a location of the first actor (The camera is adjusted to follow the actor (e.g. the first actor) with the closest location to the camera, paragraphs 0016 and 0024.).
Claims 1 and 3-5 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US 2016/0225410) in view of Boyle et al. (US 2013/0162852).
Consider claim 1, Lee et al. teaches:
A system (see figures 1 and 4A) comprising:
a camera (camera, 404, figure 4A, paragraph 0104);
a plurality of processing units (external sensors 126.1, 126.2, 126.N, figure 1, 406, figure 4, paragraphs 0034, 0035 and 0104) associated with a plurality of actors participating in an activity (i.e. a plurality of users, paragraphs 0102 and 0118), wherein each respective actor of the actors having one or more sensors in communication with a respective processing unit among the plurality of processing units, the respective processing unit configured to process sensor data from the one or more sensors to identify a state of the respective actor (The sensors (126, 406) measure (i.e. via one or more sensors), generate (i.e. process), store (i.e. process) and transmit (i.e. process) one or more sensor parameters, paragraphs 0036-0038, 0106 and 0109. The sensor parameters identify a state of a respective actor. Because the sensors (126, 406) perform both the measuring and the processing, the one or more sensors must be in communication with the respective processing unit. Respective users have respective cameras and sensors, as detailed in paragraph 0118.);
at least one interface coupled with the camera (404) and configured to communicate with the processing units (126.1, 126.2, 126.N, 406), to receive from the processing units (126.1, 126.2, 126.N, 406) data generated by sensors attached to the actors (A communication unit (120) communicates over an interface to receive sensor parameters (i.e. identifying states of the actors) from the processing units, paragraphs 0029, 0031, 0104, 0106, 0109 and 0110.), and to communicate with the camera (A CPU (104) performs various acts in accordance with applicable embodiments described in Lee et al. (paragraph 0055). The CPU (104) controls a camera (camera unit, 124, paragraph 0056), and thus must communicate with the camera (124) over an interface.);
at least one microprocessor (CPU (104), paragraph 0054); and
a memory (memory unit, 112) storing instructions configured to instruct the at least one microprocessor (104, see paragraphs 0054 and 0055) to:
predict, based on the data received from the processing units, states of the actors that are about to occur (In order to generate highlights, a camera direction and/or a camera zoom may be changed such that the user may be recorded in greater detail based upon a user’s motion signature (see paragraph 0106). For instance, a camera may not be initially pointed at a user and may be redirected toward the user based upon the user’s motion signature (see paragraph 0110). Since the camera is redirected toward the user upon detection of a motion signature such that the user is captured in greater detail in order to generate highlight clips, it is clear that the motion signature of the user is predictive that an athletic maneuver (i.e. a state of the actor) worthy of a highlight clip is about to occur. Additionally, Lee et al. teaches that an event time of a highlight may be based on a rate of change of sensor parameters “prior to a physical event” (see paragraph 0073). Therefore, Lee et al. teaches controlling the camera based on predicted states that are about to occur. The prediction includes determining whether the sensor parameters match a stored motion signature, paragraphs 0106, 0109 and 0110.);
adjust, via the at least one interface, an operation parameter of the camera (124), based on identification, based on the states of the actors predicted from the data generated by the sensors attached to the actors (e.g. to cause the camera to change a zoom level, initiate recording of video, and/or change a camera direction, paragraphs 0056, 0106, 0109 and 0110).
However, Lee et al. does not explicitly teach that the instructions are further configured to instruct the at least one microprocessor to identify an actor of focus from the plurality of actors based on the states of the actors.
Boyle et al. similarly teaches a system (figure 1) comprising a processing unit (remote device, 16, paragraphs 0042 and 0064-0070) worn by an actor (i.e. by a subject, paragraphs 0042, 0043 and 0068), and an orientation controller (70, paragraphs 0044 and 0071) including a processor (CPU, 30, paragraph 0071), wherein the orientation controller (70) controls a camera (46, paragraphs 0045 and 0071) based upon a state of the actor determined from the processing unit (16, see paragraphs 0046, 0060, 0102, 0103 and 0118-0125).
However, Boyle et al. additionally teaches identifying an actor of focus from a plurality of actors based on the states of the actors (“For example, one selection scheme may be based on target velocity such that the target with the highest detectable speed is tracked. This method is applicable, for example, when multiple surfers are in the ocean, wearing remote devices that communicate with the same base station. The surfer who moves the fastest will be tracked.” see paragraphs 0134 and 0131).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have the instructions configured to instruct the at least one microprocessor taught by Lee et al. identify an actor of focus from a plurality of actors based on the states of the actors as taught by Boyle et al. for the benefit of enabling tracking of an actor most likely to be riding a wave (Boyle et al., paragraph 0134).
Consider claim 3, and as applied to claim 1 above, Lee et al. further teaches that the operation parameter is one of: a zoom level of the camera, and a direction of the camera (e.g. to cause the camera to change a zoom level, initiate recording of video, and/or change a camera direction, paragraphs 0056, 0106, 0109 and 0110.).
Consider claim 4, and as applied to claim 1 above, Lee et al. does not explicitly teach selecting the actor of focus from the plurality of actors.
Boyle et al. further teaches that the actor of focus is selected based at least in part on performances of the plurality of actors measured using the sensors attached to the actors (“For example, one selection scheme may be based on target velocity such that the target with the highest detectable speed is tracked. This method is applicable, for example, when multiple surfers are in the ocean, wearing remote devices that communicate with the same base station. The surfer who moves the fastest will be tracked.” see paragraphs 0134 and 0131).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have the instructions configured to instruct the at least one microprocessor taught by Lee et al. select the actor of focus based at least in part on performances of the plurality of actors measured using the sensors attached to the actors as taught by Boyle et al. for the benefit of enabling tracking of an actor most likely to be riding a wave (Boyle et al., paragraph 0134).
Consider claim 5, and as applied to claim 1 above, Lee et al. further teaches predicting that an actor is about to perform an action of interest (i.e. via sensor parameters matching a motion signature, paragraphs 0047, 0048, 0071, 0072, 0106, 0109 and 0110).
Lee et al. does not explicitly teach selecting the actor of focus from the plurality of actors.
Boyle et al. further teaches that the actor of focus is selected based at least in part on a prediction, based on the states of the actors, that the actor of focus is about to perform an action of interest (i.e. based on a prediction that the actor of focus is about to ride a wave, paragraph 0134).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have the instructions configured to instruct the at least one microprocessor taught by Lee et al. select the actor of focus based upon a prediction as taught by Boyle et al. for the benefit of enabling tracking of an actor most likely to be riding a wave (Boyle et al., paragraph 0134).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALBERT H CUTLER whose telephone number is (571)270-1460. The examiner can normally be reached Monday through Friday, approximately 8:00 a.m. to 4:30 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sinh Tran, can be reached at (571)272-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALBERT H CUTLER/Primary Examiner, Art Unit 2637