DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office action is in response to the Preliminary Amendment filed on 03/10/2025 for application number 18/902,252, filed on 09/30/2024, in which claims 1-20 were originally presented for examination.
Claims 1, 2, 6, and 7 are amended.
Claims 8-20 are canceled.
Claims 21-33 are new.
Claims 1-7 and 21-33 are currently pending in this application.
Claim Objections
Claim 6 is objected to because of the following informalities:
The limitation “the user ID that and are based upon” of claim 6 is grammatically incorrect. Appropriate correction is required.
Claims 32 and 33 are objected to under 37 CFR 1.75 as being substantial duplicates of one another. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Claim Rejections - 35 USC § 112(a)
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-7 and 21-33 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claims 1 and 29 recite, and claim 21 similarly recites, “relate preferences of the user with the user ID” and “the temporal attribute and the spatial attribute associated with the user ID of the user.” However, these limitations were not described in the specification as originally filed.
Regarding the limitation “relate preferences of the user with the user ID,” the specification states that an authentication component authenticates a user associated with client computing platforms (e.g., a cellular telephone, a smartphone, a digital camera, a laptop, a tablet computer, a desktop computer, a television set-top box, a smart TV, a gaming console, or the like) to provide access to a repository of images and/or video segments, and a user ID is associated with the client computing platforms (see para. [0012]-[0013] and [0021]-[0025] of the specification). Preferences for flight control settings are determined based on an obtained set of flight control settings associated with a video segment (see para. [0010]-[0011] of the specification). A system may determine the preferences for the flight control settings of an unmanned aerial vehicle associated with the user based on the obtained set of flight control settings associated with the video segments for which the user has a preference (para. [0057] of the specification).
However, nowhere does the specification state that preferences of the user are related with the user ID. The user ID is described merely as a means of authenticating the user to permit access to the repository (see para. [0013] of the specification). Moreover, it is well known in the art that information can be associated with a user without a user ID, for example by using temporary tokens, cookies, local storage, or the like.
Therefore, the limitation “relate preferences of the user with the user ID” is not supported by the original disclosure.
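By way of illustration only of the well-known alternative noted above (hypothetical Python; the code and all names are invented and form no part of the record or the prior art), a minimal sketch in which preference data is keyed to a temporary session token rather than to a persistent user ID:

    import secrets

    # Preferences keyed by a short-lived session token, not a user ID.
    preferences_by_token: dict[str, dict] = {}

    def start_session() -> str:
        """Issue an anonymous, temporary session token."""
        return secrets.token_hex(16)

    def save_preference(token: str, key: str, value: str) -> None:
        """Associate a preference with the session token alone."""
        preferences_by_token.setdefault(token, {})[key] = value

    token = start_session()
    save_preference(token, "flight_mode", "follow")
    print(preferences_by_token[token])  # {'flight_mode': 'follow'}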
Regarding the limitation “the temporal attribute and the spatial attribute associated with the user ID of the user,” the specification states that contextual information associated with captured video segments may define temporal attributes and/or spatial attributes associated with the video segments (see para. [0011] of the specification). Current temporal attributes and/or spatial attributes are associated with an unmanned aerial vehicle and current flight control settings of the unmanned aerial vehicle (see para. [0042]-[0049] of the specification).
However, nowhere does the specification state that the temporal attribute and the spatial attribute are associated with the user ID. The user ID is described merely as a means of authenticating the user to permit access to the repository (see para. [0013] of the specification).
Therefore, the limitation “the temporal attribute and the spatial attribute associated with the user ID of the user” is not supported by the original disclosure.
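Again for illustration only (hypothetical Python; all names are invented rather than drawn from the record), a minimal sketch of the data model described above, in which the temporal and spatial attributes attach to the video segment while the user ID serves only to authenticate repository access:

    from dataclasses import dataclass

    @dataclass
    class VideoSegment:
        segment_id: str
        capture_time: str                      # temporal attribute of the segment
        capture_location: tuple[float, float]  # spatial attribute of the segment

    @dataclass
    class UserRecord:
        user_id: str  # used only to authenticate access to the repository
        # no temporal or spatial attribute is keyed to the user ID here

    seg = VideoSegment("seg-1", "2024-09-30T12:00:00Z", (37.77, -122.41))
    user = UserRecord("user-42")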
Claim 6 recites “the flight control settings are associated with the user ID that and are based upon the images and/or the video segments viewed by the user associated with the user ID.” However, the limitation was not described in the specification as originally filed.
The specification states that preferences for flight control settings are associated with a user who consumed a video segment (see para. [0041] of the specification).
However, nowhere does the specification state that the flight control settings are associated with the user ID. The user ID is described merely as a means of authenticating the user to permit access to the repository (see para. [0013] of the specification).
Therefore, the limitation “the flight control settings are associated with the user ID that and are based upon the images and/or the video segments viewed by the user associated with the user ID” is not supported by the original disclosure.
Claims 2-5, 7, 22-28, and 30-33 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as being dependent on rejected claim(s) and for failing to cure the deficiencies listed above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 5, 7, 21-23, 25, 27-30, 32, and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Beard et al. (US 9,769,387 B1, hereinafter “Beard”) in view of Yahata et al. (US 2014/0289818 A1, hereinafter “Yahata”).
Regarding claim 1, Beard discloses a system comprising an unmanned aerial vehicle including image sensors (Beard at col. 3, ln. 9-12: “an action camera system 200 incorporated aboard a UAV 100 determines trackable objects and selects one as its target, either automatically or remotely according to user input”), and one or more physical computer processors configured by computer readable instructions to (Beard at col. 7, ln. 29-32: “image processor 234 analyzes the stream of images captured by camera 210 and image sensor 212 to select a target 118 and determine optimal and current orientations of the target 118 to the UAV 100”):
capture images and/or video segments using the image sensors (Beard at col. 3, ln. 31-33: “the action camera system 200 enables the capture of real-time streaming video from multiple unique perspectives”); relate a temporal attribute with the images and/or the video segments captured (Beard at col. 7, ln. 2-6: “the action camera system 200 may use current and previous position data to plot a possible trajectory 144b for skier 118, assess the velocity or acceleration of skier 118 (and direct UAV 100 to match it), or determine the position of skier 118 at a given future time”; The system must relate time to the images or the video segments to plot the possible trajectory at the given future time); relate a spatial attribute with the images and/or the video segments captured (Beard at col. 7, ln. 2-6: “the action camera system 200 may use current and previous position data to plot a possible trajectory 144b for skier 118, assess the velocity or acceleration of skier 118 (and direct UAV 100 to match it), or determine the position of skier 118 at a given future time”; The system must relate the position data to the images or the video segments to plot the possible trajectory at the given future time); and transmit control instructions to the unmanned aerial vehicle based upon the preferences of the user that include: the temporal attribute and the spatial attribute (Beard at Abstract: “The action camera system may additionally provide preselected modes of operation that control UAV movement and image capture depending on the user's desired objectives”; col. 2, ln. 4-8: “the action camera system captures a reference image via an onboard camera, the images defining a desired orientation of the target to the UAV and including image elements corresponding to the target and to a pattern uniquely associated with the target”; col. 4, ln. 60-63: “action camera system 200 directs the attitude control system 232 to adjust the speed of one or more rotors 208, thereby adjusting the speed, direction, or rotational orientation of UAV 100”; col. 7, ln. 2-6: “the action camera system 200 may use current and previous position data to plot a possible trajectory 144b for skier 118, assess the velocity or acceleration of skier 118 (and direct UAV 100 to match it), or determine the position of skier 118 at a given future time”).
However, Beard does not explicitly state:
create a user ID for a user when the user completes registration;
relate preferences of the user with the user ID;
the temporal attribute and the spatial attribute associated with the user ID of the user.
In the same field of endeavor, Yahata teaches:
create a user ID for a user when the user completes registration (Yahata at para. [0063]: “group establishment may be performed by generating group information in which a group ID identifying a newly established group and a user ID are associated with each other, as shown in FIG. 4”); relate preferences of the user with the user ID (Yahata at para. [0073]: “It should be noted that the user identified by the user ID associated with the video information in the management information has the right of management of the current video information”; para. [0056]: “a video management system 100 includes groups A and B into which users A to B are divided. Thus, the video management system 100 can make a setting separately for each of the groups, for viewing or usage of the uploaded video information (moving image)”);
the temporal attribute and the spatial attribute associated with the user ID of the user (Yahata at para. [0042]: “when the user uploading the video information belongs to a plurality of groups, the video information may be further associated in the associating with the user ID and a group ID identifying a group selected by the user”; para. [0046]: “the video information may be further associated in the associating with a location and a time where the video information is captured”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Beard by adding the user ID of Yahata with a reasonable expectation of success. The motivation to modify the system of Beard in view of Yahata is to provide efficient management of image data.
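For illustration only (hypothetical Python with invented names; this is a sketch of the rationale articulated above, not the applicant's method or either reference's implementation), the proposed combination may be visualized as creating a user ID at registration and keying preferences and contextual attributes to that ID:

    # Hypothetical sketch of the articulated Beard/Yahata combination.
    registry: dict[str, dict] = {}

    def register_user(name: str) -> str:
        user_id = f"user-{len(registry) + 1}"  # create a user ID at registration
        registry[user_id] = {"name": name, "preferences": {}, "attributes": []}
        return user_id

    def relate_preference(user_id: str, key: str, value: str) -> None:
        registry[user_id]["preferences"][key] = value  # relate preferences with the user ID

    def relate_attributes(user_id: str, time: str, location: tuple[float, float]) -> None:
        # associate temporal and spatial attributes with the user ID
        registry[user_id]["attributes"].append({"time": time, "location": location})

    uid = register_user("pilot")
    relate_preference(uid, "tracking_distance_m", "10")
    relate_attributes(uid, "2024-09-30T12:00:00Z", (37.77, -122.41))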
Regarding claim 2, Beard in view of Yahata teaches the system of claim 1.
Yahata further teaches wherein the user ID is associated with a social network service or a messaging service (Yahata at para. [0007]: “a video management method according to an aspect of the present invention is a video management method of managing video information which is uploaded to a server by a user belonging to a group including a virtual administrator and a plurality of users and which is viewable by at least one different user belonging to the group via the Internet using an information terminal”; para. [0061]: “the group is set based on the mailing list. Thus, whenever the user posts a moving image, a message that a new moving image has been posted may be distributed to this mailing list”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Beard in view of Yahata by adding the social network or the messaging service of Yahata with a reasonable expectation of success. The motivation to modify the system of Beard in view of Yahata is to provide efficient management of image data.
Regarding claim 5, Beard in view of Yahata teaches the system of claim 1.
Beard further discloses further comprising:
a preferences component configured to determine preferences of flight control settings of the unmanned aerial vehicle based upon a first set of flight control settings and a second set of flight control settings (Beard at col. 4, ln. 64 - col. 5, ln. 1: “a desired orientation includes a set of parameters representing an ideal position of the UAV 100 relative to target 118, from which perspective the action camera system 200 can provide streaming video images of the target 118 in a given environment”; col. 5, ln. 9-15: “preprogrammed modes include information about suggested camera orientations, tracking distances, movement sequences (e.g., a continuous shot of a target from a UAV revolving around the target at a given distance) or cinematographic settings (e.g., frame rates, frame speeds, likely lighting conditions, etc.)”; col. 5, ln. 31-35: “an orientation includes both absolute parameters (information about the absolute position of the UAV 100, e.g., relative to true north) and relative parameters (information about the position of UAV 100 relative to a selected target 118)”).
Regarding claim 7, Beard in view of Yahata teaches the system of claim 1.
Beard further discloses further comprising:
a transmission component configured to transmit the control instructions to the unmanned aerial vehicle in real-time or near real-time in response to receiving current contextual information associated with detection of a current one of the images and/or the video segments (Beard at col. 2, ln. 29-32: “the action camera system includes an imaging processor connected to the camera, for processing the images and received position data and controlling the UAV based on the image and data processing”; col. 4, ln. 48-53: “the action camera system 200 may use orientation information derived from a reference image 110 in addition to position data provided by smartphone 140, or by attitude control sensors 218, to establish a desired angle of elevation and a desired bearing relative to target 118”; col. 4, ln. 60-63: “action camera system 200 directs the attitude control system 232 to adjust the speed of one or more rotors 208, thereby adjusting the speed, direction, or rotational orientation of UAV 100”).
Regarding claim 21, Beard in view of Yahata teaches the system of claim 1.
Beard further discloses wherein the spatial attributes comprise: a geolocation attribute, a date attribute and/or a content attribute (Beard at col. 7, ln. 2-6: “the action camera system 200 may use current and previous position data to plot a possible trajectory 144b for skier 118, assess the velocity or acceleration of skier 118 (and direct UAV 100 to match it), or determine the position of skier 118 at a given future time”).
Regarding claim 22, Beard discloses a method comprising:
detecting images and/or video segments with image sensors of an unmanned aerial vehicle (Beard at col. 3, ln. 9-12: “an action camera system 200 incorporated aboard a UAV 100 determines trackable objects and selects one as its target, either automatically or remotely according to user input”; col. 3, ln. 31-33: “the action camera system 200 enables the capture of real-time streaming video from multiple unique perspectives”);
processing the images and/or video segments with one or more physical computer processors (Beard at col. 7, ln. 29-32: “image processor 234 analyzes the stream of images captured by camera 210 and image sensor 212 to select a target 118 and determine optimal and current orientations of the target 118 to the UAV 100”);
relating a temporal attribute with the images and/or the video segments detected (Beard at col. 7, ln. 2-6: “the action camera system 200 may use current and previous position data to plot a possible trajectory 144b for skier 118, assess the velocity or acceleration of skier 118 (and direct UAV 100 to match it), or determine the position of skier 118 at a given future time”; The system must relate time to the images or the video segments to plot the possible trajectory at the given future time);
relating a spatial attribute with the images and/or the video segments captured (Beard at col. 7, ln. 2-6: “the action camera system 200 may use current and previous position data to plot a possible trajectory 144b for skier 118, assess the velocity or acceleration of skier 118 (and direct UAV 100 to match it), or determine the position of skier 118 at a given future time”; The system must relate the position data to the images or the video segments to plot the possible trajectory at the given future time); and
transmitting control instructions to the unmanned aerial vehicle based upon the preferences of the user that include the temporal attribute and the spatial attribute (Beard at Abstract: “The action camera system may additionally provide preselected modes of operation that control UAV movement and image capture depending on the user's desired objectives”; col. 2, ln. 4-8: “the action camera system captures a reference image via an onboard camera, the images defining a desired orientation of the target to the UAV and including image elements corresponding to the target and to a pattern uniquely associated with the target”; col. 4, ln. 60-63: “action camera system 200 directs the attitude control system 232 to adjust the speed of one or more rotors 208, thereby adjusting the speed, direction, or rotational orientation of UAV 100”; col. 7, ln. 2-6: “the action camera system 200 may use current and previous position data to plot a possible trajectory 144b for skier 118, assess the velocity or acceleration of skier 118 (and direct UAV 100 to match it), or determine the position of skier 118 at a given future time”).
However, Beard does not explicitly state:
creating a user ID and/or other identifying information for a user and/or a consumer when the user and/or the consumer registers the unmanned aerial vehicle;
relating preferences of the user with the user ID;
the temporal attribute and the spatial attribute associated with the user ID of the user.
In the same field of endeavor, Yahata teaches:
creating a user ID and/or other identifying information for a user and/or a consumer when the user and/or the consumer registers the unmanned aerial vehicle (Yahata at para. [0063]: “group establishment may be performed by generating group information in which a group ID identifying a newly established group and a user ID are associated with each other, as shown in FIG. 4”);
relating preferences of the user with the user ID (Yahata at para. [0073]: “It should be noted that the user identified by the user ID associated with the video information in the management information has the right of management of the current video information”; para. [0056]: “a video management system 100 includes groups A and B into which users A to B are divided. Thus, the video management system 100 can make a setting separately for each of the groups, for viewing or usage of the uploaded video information (moving image)”);
the temporal attribute and the spatial attribute associated with the user ID of the user (Yahata at para. [0042]: “when the user uploading the video information belongs to a plurality of groups, the video information may be further associated in the associating with the user ID and a group ID identifying a group selected by the user”; para. [0046]: “the video information may be further associated in the associating with a location and a time where the video information is captured”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Beard by adding the user ID of Yahata with a reasonable expectation of success. The motivation to modify the method of Beard in view of Yahata is to provide efficient management of image data.
Regarding claim 23, Beard in view of Yahata teaches the method of claim 22.
Yahata further teaches wherein the user ID is associated with a social network service or a messaging service (Yahata at para. [0007]: “a video management method according to an aspect of the present invention is a video management method of managing video information which is uploaded to a server by a user belonging to a group including a virtual administrator and a plurality of users and which is viewable by at least one different user belonging to the group via the Internet using an information terminal”; para. [0061]: “the group is set based on the mailing list. Thus, whenever the user posts a moving image, a message that a new moving image has been posted may be distributed to this mailing list”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Beard in view of Yahata by adding the social network or the messaging service of Yahata with a reasonable expectation of success. The motivation to modify the method of Beard in view of Yahata is to provide efficient management of image data.
Regarding claim 25, Beard in view of Yahata teaches the method of claim 22.
Beard further discloses further comprising:
determining preferences of flight control settings of the unmanned aerial vehicle, with a preferences component, based upon a first set of flight control settings and a second set of flight control settings (Beard at col. 4, ln. 64 - col. 5, ln. 1: “a desired orientation includes a set of parameters representing an ideal position of the UAV 100 relative to target 118, from which perspective the action camera system 200 can provide streaming video images of the target 118 in a given environment”; col. 5, ln. 9-15: “preprogrammed modes include information about suggested camera orientations, tracking distances, movement sequences (e.g., a continuous shot of a target from a UAV revolving around the target at a given distance) or cinematographic settings (e.g., frame rates, frame speeds, likely lighting conditions, etc.)”; col. 5, ln. 31-35: “an orientation includes both absolute parameters (information about the absolute position of the UAV 100, e.g., relative to true north) and relative parameters (information about the position of UAV 100 relative to a selected target 118)”).
Regarding claim 27, Beard in view of Yahata teaches the method of claim 22.
Beard further discloses further comprising:
transmitting instructions via a transmission component to the unmanned aerial vehicle in real-time or near real-time in response to receiving current contextual information associated with a detection of a current one of the images and/or the video segments (Beard at col. 2, ln. 29-32: “the action camera system includes an imaging processor connected to the camera, for processing the images and received position data and controlling the UAV based on the image and data processing”; col. 4, ln. 48-53: “the action camera system 200 may use orientation information derived from a reference image 110 in addition to position data provided by smartphone 140, or by attitude control sensors 218, to establish a desired angle of elevation and a desired bearing relative to target 118”; col. 4, ln. 60-63: “action camera system 200 directs the attitude control system 232 to adjust the speed of one or more rotors 208, thereby adjusting the speed, direction, or rotational orientation of UAV 100”).
Regarding claim 28, Beard in view of Yahata teaches the method of claim 27.
Beard further discloses wherein the instructions include preferences of flight control settings of the unmanned aerial vehicle (Beard at col. 2, ln. 29-32: “the action camera system includes an imaging processor connected to the camera, for processing the images and received position data and controlling the UAV based on the image and data processing”; col. 4, ln. 48-53: “the action camera system 200 may use orientation information derived from a reference image 110 in addition to position data provided by smartphone 140, or by attitude control sensors 218, to establish a desired angle of elevation and a desired bearing relative to target 118”; col. 4, ln. 60-63: “action camera system 200 directs the attitude control system 232 to adjust the speed of one or more rotors 208, thereby adjusting the speed, direction, or rotational orientation of UAV 100”).
Regarding claim 29, Beard discloses a non-transitory computer-readable storage medium, comprising processor-executable routines that, when executed by a processor, facilitate a performance of operations at an unmanned aerial vehicle (Beard at col. 3, ln. 9-12: “an action camera system 200 incorporated aboard a UAV 100 determines trackable objects and selects one as its target, either automatically or remotely according to user input”; col. 7, ln. 51-53: “the action camera system 200 includes onboard data storage and memory 206 as well as removable data storage and memory units 224”), the operations comprising operations to:
detect images and/or video segments with image sensors of the unmanned aerial vehicle (Beard at col. 3, ln. 31-33: “the action camera system 200 enables the capture of real-time streaming video from multiple unique perspectives”);
relate a temporal attribute with the images and/or the video segments captured (Beard at col. 7, ln. 2-6: “the action camera system 200 may use current and previous position data to plot a possible trajectory 144b for skier 118, assess the velocity or acceleration of skier 118 (and direct UAV 100 to match it), or determine the position of skier 118 at a given future time”; The system must relate time to the images or the video segments to plot the possible trajectory at the given future time);
relate a spatial attribute with the images and/or the video segments captured (Beard at col. 7, ln. 2-6: “the action camera system 200 may use current and previous position data to plot a possible trajectory 144b for skier 118, assess the velocity or acceleration of skier 118 (and direct UAV 100 to match it), or determine the position of skier 118 at a given future time”; The system must relate the position data to the images or the video segments to plot the possible trajectory at the given future time); and
transmit control instructions to the unmanned aerial vehicle based upon the preferences of the user that include the temporal attribute and the spatial attribute (Beard at Abstract: “The action camera system may additionally provide preselected modes of operation that control UAV movement and image capture depending on the user's desired objectives”; col. 2, ln. 4-8: “the action camera system captures a reference image via an onboard camera, the images defining a desired orientation of the target to the UAV and including image elements corresponding to the target and to a pattern uniquely associated with the target”; col. 4, ln. 60-63: “action camera system 200 directs the attitude control system 232 to adjust the speed of one or more rotors 208, thereby adjusting the speed, direction, or rotational orientation of UAV 100”; col. 7, ln. 2-6: “the action camera system 200 may use current and previous position data to plot a possible trajectory 144b for skier 118, assess the velocity or acceleration of skier 118 (and direct UAV 100 to match it), or determine the position of skier 118 at a given future time”).
However, Beard does not explicitly state:
create a user ID and/or other identifying information for a user and/or a consumer when the user and/or the consumer registers;
relate preferences of the user with the user ID;
the temporal attribute and the spatial attribute associated with the user ID of the user.
In the same field of endeavor, Yahata teaches:
create a user ID and/or other identifying information for a user and/or a consumer when the user and/or the consumer registers (Yahata at para. [0063]: “group establishment may be performed by generating group information in which a group ID identifying a newly established group and a user ID are associated with each other, as shown in FIG. 4”);
relate preferences of the user with the user ID (Yahata at para. [0073]: “It should be noted that the user identified by the user ID associated with the video information in the management information has the right of management of the current video information”; para. [0056]: “a video management system 100 includes groups A and B into which users A to B are divided. Thus, the video management system 100 can make a setting separately for each of the groups, for viewing or usage of the uploaded video information (moving image)”);
the temporal attribute and the spatial attribute associated with the user ID of the user (Yahata at para. [0042]: “when the user uploading the video information belongs to a plurality of groups, the video information may be further associated in the associating with the user ID and a group ID identifying a group selected by the user”; para. [0046]: “the video information may be further associated in the associating with a location and a time where the video information is captured”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the medium of Beard by adding the user ID of Yahata with a reasonable expectation of success. The motivation to modify the medium of Beard in view of Yahata is to provide efficient management of image data.
Regarding claim 30, Beard in view of Yahata teaches the non-transitory computer-readable storage medium of claim 29.
Yahata further teaches wherein the user ID is associated with a social network service or a messaging service (Yahata at para. [0007]: “a video management method according to an aspect of the present invention is a video management method of managing video information which is uploaded to a server by a user belonging to a group including a virtual administrator and a plurality of users and which is viewable by at least one different user belonging to the group via the Internet using an information terminal”; para. [0061]: “the group is set based on the mailing list. Thus, whenever the user posts a moving image, a message that a new moving image has been posted may be distributed to this mailing list”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the medium of Beard in view of Yahata by adding the social network or the messaging service of Yahata with a reasonable expectation of success. The motivation to modify the medium of Beard in view of Yahata is to provide efficient management of image data.
Regarding claim 32, Beard in view of Yahata teaches the non-transitory computer-readable storage medium of claim 30.
Beard further discloses wherein the processor further comprises:
a preferences component configured to determine preferences of flight control settings of the unmanned aerial vehicle based upon a first set of flight control settings and a second set of flight control settings (Beard at col. 4, ln. 64 - col. 5, ln. 1: “a desired orientation includes a set of parameters representing an ideal position of the UAV 100 relative to target 118, from which perspective the action camera system 200 can provide streaming video images of the target 118 in a given environment”; col. 5, ln. 9-15: “preprogrammed modes include information about suggested camera orientations, tracking distances, movement sequences (e.g., a continuous shot of a target from a UAV revolving around the target at a given distance) or cinematographic settings (e.g., frame rates, frame speeds, likely lighting conditions, etc.)”; col. 5, ln. 31-35: “an orientation includes both absolute parameters (information about the absolute position of the UAV 100, e.g., relative to true north) and relative parameters (information about the position of UAV 100 relative to a selected target 118)”).
Regarding claim 33, Beard in view of Yahata teaches the non-transitory computer-readable storage medium of claim 30.
Beard further discloses wherein the processor further comprises:
a preferences component configured to determine preferences of flight control settings of the unmanned aerial vehicle based upon a first set of flight control settings and a second set of flight control settings (Beard at col. 4, ln. 64 - col. 5, ln. 1: “a desired orientation includes a set of parameters representing an ideal position of the UAV 100 relative to target 118, from which perspective the action camera system 200 can provide streaming video images of the target 118 in a given environment”; col. 5, ln. 9-15: “preprogrammed modes include information about suggested camera orientations, tracking distances, movement sequences (e.g., a continuous shot of a target from a UAV revolving around the target at a given distance) or cinematographic settings (e.g., frame rates, frame speeds, likely lighting conditions, etc.)”; col. 5, ln. 31-35: “an orientation includes both absolute parameters (information about the absolute position of the UAV 100, e.g., relative to true north) and relative parameters (information about the position of UAV 100 relative to a selected target 118)”).
Claims 3, 4, 24, and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Beard in view of Yahata further in view of Bostick et al. (US 9,510,051 B1, hereinafter “Bostick”).
Regarding claim 3, Beard in view of Yahata teaches the system of claim 1.
However, Beard in view of Yahata does not explicitly state further comprising:
a consumption component configured to obtain consumption information associated with a user consuming the images and/or the video segments.
In the same field of endeavor, Bostick teaches further comprising:
a consumption component configured to obtain consumption information associated with a user consuming the images and/or the video segments (Bostick at col. 12, ln. 60-65: “In process 206, overlay program 124 retrieves the viewing habits of the user from profile data 126. As a user views video data 114 from content provider 110, overlay program 124 stores respective content data 116 regarding the video data 114 when viewed by the user via video player program”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Beard in view of Yahata by adding the consumption component of Bostick with a reasonable expectation of success. The motivation to modify the system of Beard in view of Yahata further in view of Bostick is to provide efficient management of user preference.
Regarding claim 4, Beard in view of Yahata teaches the system of claim 1.
However, Beard in view of Yahata does not explicitly state further comprising:
a consumption component configured to track user engagement and/or viewing habits during a video segment and/or during at least one portion of the video segment.
In the same field of endeavor, Bostick teaches further comprising:
a consumption component configured to track user engagement and/or viewing habits during a video segment and/or during at least one portion of the video segment (Bostick at col. 12, ln. 60-65: “In process 206, overlay program 124 retrieves the viewing habits of the user from profile data 126. As a user views video data 114 from content provider 110, overlay program 124 stores respective content data 116 regarding the video data 114 when viewed by the user via video player program”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Beard in view of Yahata by adding the consumption component of Bostick with a reasonable expectation of success. The motivation to modify the system of Beard in view of Yahata further in view of Bostick is to provide efficient management of user preference.
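For illustration only (hypothetical Python with invented names; not Bostick's implementation), the consumption component addressed in claims 3 and 4 may be visualized as logging viewing events per video segment:

    from collections import defaultdict

    # Viewing habits recorded per (user, segment); all values are invented.
    viewing_log: dict[tuple[str, str], list[float]] = defaultdict(list)

    def record_view(user: str, segment_id: str, seconds_watched: float) -> None:
        """Track engagement for a video segment or a portion of it."""
        viewing_log[(user, segment_id)].append(seconds_watched)

    record_view("user-42", "seg-1", 12.5)
    record_view("user-42", "seg-1", 30.0)
    print(sum(viewing_log[("user-42", "seg-1")]))  # total seconds watched: 42.5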
Regarding claim 24, Beard in view of Yahata teaches the method of claim 23.
However, Beard in view of Yahata does not explicitly state further comprising:
tracking user engagement and/or viewing habits, with a consumer component, during a video segment and/or during at least one portion of the video segment.
In the same field of endeavor, Bostick teaches further comprising:
tracking user engagement and/or viewing habits, with a consumer component, during a video segment and/or during at least one portion of the video segment (Bostick at col. 12, ln. 60-65: “In process 206, overlay program 124 retrieves the viewing habits of the user from profile data 126. As a user views video data 114 from content provider 110, overlay program 124 stores respective content data 116 regarding the video data 114 when viewed by the user via video player program”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Beard in view of Yahata by adding the consumption component of Bostick with a reasonable expectation of success. The motivation to modify the method of Beard in view of Yahata further in view of Bostick is to provide efficient management of user preference.
Regarding claim 31, Beard in view of Yahata teaches the non-transitory computer-readable storage medium of claim 30.
However, Beard in view of Yahata does not explicitly state wherein the processor further comprises:
a consumption component configured to obtain consumption information associated with a user consuming the images and/or the video segments.
In the same field of endeavor, Bostick teaches wherein the processor further comprises:
a consumption component configured to obtain consumption information associated with a user consuming the images and/or the video segments (Bostick at col. 12, ln. 60-65: “In process 206, overlay program 124 retrieves the viewing habits of the user from profile data 126. As a user views video data 114 from content provider 110, overlay program 124 stores respective content data 116 regarding the video data 114 when viewed by the user via video player program”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the medium of Beard in view of Yahata by adding the consumption component of Bostick with a reasonable expectation of success. The motivation to modify the medium of Beard in view of Yahata further in view of Bostick is to provide efficient management of user preference.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Beard in view of Yahata further in view of Wang et al. (US 2019/0011921 A1, hereinafter “Wang”) and Gong et al. (US 2018/0068567 A1, hereinafter “Gong”).
Regarding claim 6, Beard in view of Yahata teaches the system of claim 5.
However, Beard in view of Yahata does not explicitly state wherein the flight control settings are associated with the user ID that and are based upon the images and/or the video segments viewed by the user associated with the user ID.
In the same field of endeavor, Wang teaches wherein the flight control settings are associated with (Wang at para. [0068]: “The improved flight control and tracking capabilities may further allow a UAV to automatically detect one or more stationary/moving target objects and to autonomously track the target objects”; para. [0081]: “One or more sensors may be provided as a payload, and may be capable of sensing the environment. The one or more sensors may include an imaging device”; para. [0082]: “The imaging device can be a camera. A camera can be a movie or video camera that captures dynamic image data (e.g., video)”; para. [0097]: “The image on the display may show a view collected with aid of a payload of the movable object” “A user at a user terminal may select a portion of the image collected by the imaging device to specify a target and/or direction of motion by the movable object”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Beard in view of Yahata by adding the flight control settings of Wang with a reasonable expectation of success. The motivation to modify the system of Beard in view of Yahata further in view of Wang is to provide efficient operation of aerial vehicles.
However, Beard in view of Yahata further in view of Wang does not explicitly state the user ID or the user associated with the user ID.
In the same field of endeavor, Gong teaches the user ID and the user associated with the user ID (Gong at para. [0095]: “A user may have a user identifier (e.g., USER ID1, USER ID2, USER ID3, ... ) that identifies the user. The user identifier may be unique to the user”; para. [0096]: “An authentication process may include a verification of the user's identity”; para. [0123]: “An air control system 230 may interact with the authentication center 220. The air control system may obtain information about the user and the UAV (and/or any other devices involved in the UAV safety system) from the authentication center (Connection 4)” “The air control system may be a management cluster that may include one or more subsystems, such as a flight supervision module 240, flight regulation module 242, traffic management module 244, user access control module 246, and UAV access control module 248. The one or more subsystems may be used for flight control, air traffic control, relevant authorization, user and UAV access management, and other functions”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Beard in view of Yahata further in view of Wang by adding the user ID of Gong with a reasonable expectation of success. The motivation to modify the system of Beard in view of Yahata further in view of Wang and Gong is to provide safe operation of UAVs by associating the user with the user ID for authentication.
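For illustration only (hypothetical Python with invented names; not Gong's implementation), the authentication rationale articulated above may be visualized as releasing flight control settings only for a verified user ID:

    # Settings keyed to authenticated user IDs (all values are invented).
    AUTHORIZED_USERS = {"USER_ID1", "USER_ID2"}

    def load_flight_settings(user_id: str) -> dict:
        if user_id not in AUTHORIZED_USERS:  # verify the user's identity first
            raise PermissionError("user ID not authenticated")
        return {"max_altitude_m": 120, "follow_distance_m": 10}

    print(load_flight_settings("USER_ID1"))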
Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over Beard in view of Yahata further in view of Wang.
Regarding claim 26, Beard in view of Yahata teaches the method of claim 25.
However, Beard in view of Yahata does not explicitly state further comprising:
associating the flight control settings with a user that views the images and/or the video segments.
In the same field of endeavor, Wang teaches further comprising:
associating the flight control settings with a user that views the images and/or the video segments (Wang at para. [0068]: “The improved flight control and tracking capabilities may further allow a UAV to automatically detect one or more stationary/moving target objects and to autonomously track the target objects”; para. [0081]: “One or more sensors may be provided as a payload, and may be capable of sensing the environment. The one or more sensors may include an imaging device”; para. [0082]: “The imaging device can be a camera. A camera can be a movie or video camera that captures dynamic image data (e.g., video)”; para. [0097]: “The image on the display may show a view collected with aid of a payload of the movable object” “A user at a user terminal may select a portion of the image collected by the imaging device to specify a target and/or direction of motion by the movable object”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Beard in view of Yahata by adding the flight control settings of Wang with a reasonable expectation of success. The motivation to modify the method of Beard in view of Yahata further in view of Wang is to provide efficient operation of aerial vehicles.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and can be found in the attached PTO-892 form.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JISUN CHOI whose telephone number is (571)270-0710. The examiner can normally be reached Mon-Fri, 9:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Scott Browne, can be reached at (571)270-0151. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JISUN CHOI/Examiner, Art Unit 3666
/SCOTT A BROWNE/Supervisory Patent Examiner, Art Unit 3666