DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The Amendment filed 19 August 2025 (hereinafter “the Amendment”) has been entered and considered. Claims 1-3, 5, and 7-17 have been amended. Claims 1-17, all the claims pending in the application, are rejected. All new grounds of rejection set forth in the present action were necessitated by Applicant’s claim amendments; accordingly, this action is made final.
Response to Amendments
Specification
In view of Applicant’s amendment to the title, the objection is withdrawn.
Claim Objections
In view of the change in dependency of claim 10, the claim objection is withdrawn.
Claim Interpretation
Independent claim 1 has been amended to recite “circuitry,” which modifies the claimed functional language. Because the claims now recite sufficient structure, they are no longer interpreted as invoking 35 USC 112(f).
Claim Rejections - 35 USC § 101 (Program per se)
In view of the amendments to claims 12, 14, 15, and 17, the rejections of these claims under 35 USC 101 as being directed to software per se are withdrawn.
Claim Rejections - 35 USC § 101 (Abstract Idea)
On page 12 of the Amendment, Applicant asserts that the claims do not recite an abstract idea. However, Applicant does not provide any evidence or rationale in support of this argument. Applicant also fails to address the Examiner’s position, as clearly articulated on page 9 of the previous action, that a human receiving video can visually observe a scene in video and mentally evaluate LUT data for correcting the scene. According to MPEP 2106.04(a)(2), such “observations” and “evaluations” qualify as a mental process. Accordingly, the specifying and setting steps in the independent claims recite mental processes (abstract ideas), contrary to Applicant’s assertions.
On pages 13-15 of the Amendment, Applicant contends that the claimed subject matter relates to a specific improvement, thus concluding that the claims recite eligible subject matter. However, Applicant does not point out what that alleged improvement is.
According to MPEP 2106.04(d)(1):
“first the specification should be evaluated to determine if the disclosure provides sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. Second, if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement. That is, the claim includes the components or steps of the invention that provide the improvement described in the specification. The claim itself does not need to explicitly recite the improvement described in the specification” (emphasis added).
Not only has Applicant failed to point out the areas of the specification that discuss the alleged improvement, but Applicant has also failed to point out how the alleged improvement is reflected in the claims. The Examiner maintains that, to whatever extent the specification describes an improvement to a technical field, such an improvement is not reflected in the broad claim language.
Finally, the Examiner notes that the newly added features of the independent claims do not integrate the abstract idea into a practical application or add significantly more, at least for the reasons detailed in the rejection below.
For all the foregoing reasons, the rejections under 35 USC 101 (Abstract Idea) are maintained.
Prior Art Rejections
Independent claims 1 and 12-17 have been amended to recite “wherein color grading is applied to the video data according to the set LUT data”. On pages 15-21 of the Amendment, Applicant argues that the applied art does not teach or suggest each and every feature recited in the independent claims. However, Applicant does not provide any evidence from the references or rationale in support of this general assertion. The Examiner maintains that the applied art teaches the newly added features of the independent claims.
Funayama discloses retrieving “image correction information” such as “color corrections” for correcting the video images of a scene “mak[ing] a reference to the database of storage unit 3” in which “image correction information…and an imaging object or an imaging condition…[are] stored correspondingly”, wherein the imaging object and the imaging condition are retrieved from the imaging information database 41 based on the latitude/longitude, direction, date, and other setting information ([0062-0077]). These portions of the reference further disclose “subjecting the image data” to a “correction process” which is “based upon the correction information obtained”. Here, Funayama clearly discloses setting data to be applied to the scene based on the setting information, wherein color correction is applied to the video data according to the set data, as claimed.
Although the retrieval process disclosed by Funayama and discussed above appears to reference table data (see tables stored in the imaging information database 41 in Figs. 2-7), Funayama does not expressly disclose that the setting information and the data to be applied to the scene are in the particular format of a look-up table (LUT). Also, while Funayama discloses a variety of color corrections that are applied to the video, the reference does not expressly disclose that the color corrections include color grading.
Loeffler, like Funayama, is directed to performing color correction (“color grading”) in video based on metadata (Abstract and [0012]). Loeffler discloses that the “color grading information” is stored and accessed as metadata using a “LUT” for “application to the selected image” in video ([0013-0022]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Funayama to store the color grading information in a LUT for retrieval and application to video images, as taught by Loeffler, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. It is predictable that the proposed modification would have saved processing time by virtue of using the simple array indexing of a LUT. Additionally, it is predictable that “color grading of selected images in such a manner affords the ability to enhance the motion picture feature presentation beyond the color properties of the original camera negative film stock, or in the case of a digitally originated movie, the color properties of the digital camera(s) that originally captured the images” ([0003] of Loeffler).
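As context for the array-indexing rationale above, the following minimal Python sketch (with entirely hypothetical names and values, not drawn from Funayama or Loeffler) illustrates why LUT-based grading is fast: once the table is built, per-pixel color correction reduces to simple array lookups with no per-pixel arithmetic.

```python
# Illustrative sketch only; all names and values are hypothetical and are
# not taken from the applied references.

def build_warm_lut():
    """Build a hypothetical 256-entry per-channel LUT that slightly
    boosts red and suppresses blue (a crude 'warm' grade)."""
    red = [min(255, int(v * 1.10)) for v in range(256)]
    green = list(range(256))
    blue = [int(v * 0.90) for v in range(256)]
    return red, green, blue

def apply_lut(pixels, lut):
    """Grade each (r, g, b) pixel by three array lookups; the arithmetic
    was done once when the LUT was built, not per pixel."""
    red, green, blue = lut
    return [(red[r], green[g], blue[b]) for r, g, b in pixels]

frame = [(200, 128, 64), (10, 10, 10)]       # two sample pixels
graded = apply_lut(frame, build_warm_lut())  # [(220, 128, 57), (11, 10, 9)]
```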
Thus, contrary to Applicant’s assertions, the proposed combination of Funayama and Loeffler does indeed teach the newly added features of the independent claims. Accordingly, the prior art rejections are maintained, as detailed below.
Claim Objections
Claim 15 is objected to because of the following informalities:
Independent claim 15 has been amended to recite “generate generates”. It appears the latter recitation “generates” was intended to be canceled from the claim.
Appropriate correction is required.
Claim Rejections - 35 USC § 101 (Abstract Idea)
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
35 U.S.C. 101 requires that a claimed invention must fall within one of the four eligible categories of invention (i.e., process, machine, manufacture, or composition of matter) and must not be directed to subject matter encompassing a judicially recognized exception as interpreted by the courts. MPEP 2106. Three categories of subject matter are judicially recognized exceptions to 35 U.S.C. § 101 (i.e., patent ineligible): (1) laws of nature, (2) physical phenomena, and (3) abstract ideas. MPEP 2106(II). To be patent-eligible, a claim directed to a judicial exception must as a whole be integrated into a practical application or directed to significantly more than the exception itself (MPEP 2106). Hence, the claim must describe a process or product that applies the exception in a meaningful way, such that it is more than a drafting effort designed to monopolize the exception. Id.
Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without integration into a practical application or recitation of significantly more.
Independent claims 1 and 12-14
In the analysis below, the system of independent claim 1 is considered representative of independent claims 1 and 12-14 since all of the independent claims recite identical steps despite being directed to different statutory matter. Furthermore, each of independent claims 1 and 12-14 is directed to one of the four statutory categories of eligible subject matter; thus, the claims pass Step 1 of the Subject Matter Eligibility Test (See flowchart in MPEP 2106).
Step 2A, prong 1 analysis
Independent claims 1 and 12-14 are directed to specifying a scene in the video data based on the scene specifying information, and setting LUT data to be applied to the scene based on the LUT setting information.
Each of the above steps can be performed mentally. In particular, a human receiving video data, scene specifying information, and LUT setting information can visually identify a scene in video and mentally evaluate LUT data for correcting the scene. The human might even write down the LUT data. As such, the steps recited in independent claims 1 and 12-14 constitute an abstract idea, namely a mental process which can be performed in the human mind with the aid of pen and paper. Notably, “The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation” (MPEP § 2106.04(a)(2)(III)). Accordingly, the analysis under prong one of step 2A of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (See flowchart in MPEP 2106).
Additional elements
The additional elements recited in each of independent claims 1 and 12-14 are the step of acquiring video data captured by the imaging device, scene specifying information, and LUT setting information from the imaging device, and the limitation wherein color grading is applied to the video data according to the set LUT data. Independent claim 1 further recites the additional element of an image processing system comprising an imaging device and an information processing device including circuitry that performs the claimed steps. Independent claim 12 further recites an information processing device comprising circuitry that performs the claimed steps. Independent claim 14 further recites a non-transitory computer-readable storage medium having embodied thereon an information processing program which, when executed by a computer, causes the computer to execute an information processing method comprising the claimed steps.
Step 2A, prong 2 analysis
The above-identified additional elements do not integrate the judicial exception into a practical application.
The step of acquiring video data captured by the imaging device, scene specifying information, and LUT setting information from the imaging device amounts to data gathering, which is insignificant pre-solution activity that does not integrate the claimed mental process into a practical application (See MPEP 2106.05(g)).
The step in which color grading is applied to the video data according to the set LUT data is a post-processing step that is insignificant post-solution activity and does not integrate the claimed mental process into a practical application (See MPEP 2106.05(g)).
Each of the other additional elements (image processing system, imaging device, information processing device, circuitry, computer-readable storage medium, and information processing program) amounts to merely using a computer as a tool to perform the claimed mental process. Implementing an abstract idea on a computer does not integrate a judicial exception into a practical application (See MPEP 2106.05(f)).
Moreover, the additional elements of the claims do not recite an improvement in the functioning of a computer or other technology or technical field, the claimed steps are not performed using a particular machine, the claimed steps do not effect a transformation, and the claims do not apply the judicial exception in any meaningful way beyond generically linking the use of the judicial exception to a particular technological environment (See MPEP 2106.04(d)). Therefore, the analysis under prong two of step 2A of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (See flowchart in MPEP 2106).
Step 2B
Finally, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As noted above, the step of acquiring video data captured by the imaging device, scene specifying information, and LUT setting information from the imaging device amounts to data gathering, which is insignificant pre-solution activity. Such insignificant pre-solution activity does not constitute significantly more than the claimed mental process (See MPEP 2106.05(g)).
The step in which color grading is applied to the video data according to the set LUT data is a post-processing step that is insignificant post-solution activity and does not constitute significantly more than the claimed mental process (See MPEP 2106.05(g)).
Each of the other additional elements (image processing system, imaging device, information processing device, circuitry, computer-readable storage medium, and information processing program) is a generic computer feature that performs generic computer functions which are well-understood, routine, and conventional, and does not amount to more than implementing the abstract idea with a computerized system. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea).
Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation, and mere implementation on a generic computer does not add significantly more to the claims. Accordingly, the analysis under step 2B of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (See flowchart in MPEP 2106).
For all of the foregoing reasons, independent claims 1 and 12-14 do not recite eligible subject matter under 35 USC 101.
Claims 2-11 depend from independent claim 1 and therefore include all of the limitations of claim 1. Accordingly, claims 2-11 recite the same abstract idea of a mental process which can be performed in the mind with the aid of pen and paper.
Claim 2 recites wherein the scene specifying information includes at least one of information regarding an environment at a time of imaging by the imaging device, information related to an imaging function by the imaging device, or information regarding a reproduction position of the video data. This feature merely describes the data that is gathered in the data gathering step. Such data gathering is insignificant pre-solution activity which does not integrate the claimed mental process into a practical application or add significantly more (See MPEP 2106.05(g)).
Claim 3 recites wherein the LUT setting information includes at least one of information regarding an environment at a time of imaging by the imaging device or information related to an imaging function by the imaging device. This feature merely describes the data that is gathered in the data gathering step. Such data gathering is insignificant pre-solution activity which does not integrate the claimed mental process into a practical application or add significantly more (See MPEP 2106.05(g)).
Claim 4 recites wherein the video data is associated with the scene specifying information for each frame constituting the video data. This feature merely describes relationships between the data that is gathered in the data gathering step. Such data gathering is insignificant pre-solution activity which does not integrate the claimed mental process into a practical application or add significantly more (See MPEP 2106.05(g)).
Claim 5 recites wherein the information processing device specifies at least one frame associated with the scene specifying information matching a condition specified by a user as the scene. This specifying step is an evaluation that can be performed mentally. Thus, the limitation is part of the mental process. The claim does not recite any additional elements.
Claim 6 recites wherein the LUT data is associated with the LUT setting information. This feature merely describes relationships between the data that is gathered in the data gathering step. Such data gathering is insignificant pre-solution activity which does not integrate the claimed mental process into a practical application or add significantly more (See MPEP 2106.05(g)).
Claim 7 recites wherein the circuitry sets LUT data associated with the LUT setting information matching a condition specified by a user as LUT data to be applied to the scene. This step is an evaluation that can be performed mentally with the aid of pen and paper (for example, evaluating a match between data and writing down the evaluated data in a table format). Thus, the limitation is part of the mental process. The claim does not recite any additional elements.
Claim 8 recites wherein the circuitry is further configured to generate an LUT application table in association with the condition and the LUT data associated with the LUT setting information matching the condition. This step is an evaluation that can be performed mentally with the aid of pen and paper (for example, evaluating a match between data and writing down the evaluated data in a table format). Thus, the limitation is part of the mental process. The claim does not recite any additional elements.
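A minimal Python sketch of the kind of LUT application table addressed above (all names and data are hypothetical and are not drawn from the claims or the applied references) shows a user-specified condition associated with the LUT data whose setting information matches it:

```python
# Hypothetical illustration of a condition-to-LUT association table.
# Names and data are invented for this sketch only.

available_luts = [
    {"setting": {"weather": "sunny"}, "lut_id": "LUT_A"},
    {"setting": {"weather": "night"}, "lut_id": "LUT_B"},
    {"setting": {"weather": "sunny"}, "lut_id": "LUT_C"},
]

def build_application_table(condition, luts):
    """Associate the condition with every LUT whose setting
    information matches it."""
    matches = [lut["lut_id"] for lut in luts
               if all(lut["setting"].get(k) == v for k, v in condition.items())]
    return {"condition": condition, "lut_data": matches}

table = build_application_table({"weather": "sunny"}, available_luts)
# table["lut_data"] holds both matching LUTs; one may then be selected
# from the plurality and applied to the scene.
```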
Claim 9 recites that the circuitry is further configured to apply color grading to the video data by applying the LUT data set by referring to the LUT application table. This additional element is considered insignificant post-solution activity which does not integrate the claimed mental process into a practical application or add significantly more (See MPEP 2106.05(g)).
Claim 10 recites wherein in a case where there is a plurality of pieces of the LUT data associated with the LUT setting information matching the condition, the circuitry sets one piece of the LUT data selected by presenting the plurality of pieces of the LUT data to the user as LUT data to be applied to the scene. This step of user selection from a plurality of provided options amounts to a mental evaluation. Thus, the limitation is part of the mental process. The claim does not recite any additional elements.
Claim 11 recites wherein in a case where a plurality of scenes is specified from the video data based on the scene specifying information, same LUT data is set to be applied to the plurality of scenes based on the LUT setting information. Since a human can observationally identify a plurality of scenes and write down data in a table format, the limitation is part of the mental process. The claim does not recite any additional elements.
Independent claims 15-17
In the analysis below, the method of independent claim 16 is considered representative of independent claims 15-17 since all of these independent claims recite identical steps despite being directed to different statutory matter. Furthermore, independent claims 15-17 are directed to one of the four statutory categories of eligible subject matter; thus, the claims pass Step 1 of the Subject Matter Eligibility Test (See flowchart in MPEP 2106).
Step 2A, prong 1 analysis
Independent claims 15-17 are directed to extracting a scene from the video data based on scene specifying information, and setting LUT data to be applied to the scene based on LUT setting information.
Each of the above steps can be performed mentally. In particular, a human receiving video data, scene specifying information, and LUT setting information can visually identify a scene in video and mentally evaluate LUT data for correcting the scene. The human might even write down the LUT data. As such, the steps recited in independent claims 15-17 constitute an abstract idea, namely a mental process which can be performed in the human mind with the aid of pen and paper. Notably, “The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation” (MPEP § 2106.04(a)(2)(III)). Accordingly, the analysis under prong one of step 2A of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (See flowchart in MPEP 2106).
Additional elements
The additional elements recited in each of independent claims 15-17 are the step of generating video data by imaging and the limitation wherein color grading is applied to the video data according to the set LUT data. Independent claim 15 further recites the additional element of an imaging device comprising circuitry that performs the claimed steps. Independent claim 17 further recites a non-transitory computer-readable storage medium having embodied thereon an information processing program which, when executed by a computer, causes the computer to execute an information processing method comprising the claimed steps.
Step 2A, prong 2 analysis
The above-identified additional elements do not integrate the judicial exception into a practical application.
The step of generating video data by imaging amounts to data gathering, which is insignificant pre-solution activity that does not integrate the claimed mental process into a practical application (See MPEP 2106.05(g)).
The step in which color grading is applied to the video data according to the set LUT data is a post-processing step that is insignificant post-solution activity and does not integrate the claimed mental process into a practical application (See MPEP 2106.05(g)).
Each of the other additional elements (imaging device, computer-readable storage medium, and information processing program) amounts to merely using a computer as a tool to perform the claimed mental process. Implementing an abstract idea on a computer does not integrate a judicial exception into a practical application (See MPEP 2106.05(f)).
Moreover, the additional elements of the claims do not recite an improvement in the functioning of a computer or other technology or technical field, the claimed steps are not performed using a particular machine, the claimed steps do not effect a transformation, and the claims do not apply the judicial exception in any meaningful way beyond generically linking the use of the judicial exception to a particular technological environment (See MPEP 2106.04(d)). Therefore, the analysis under prong two of step 2A of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (See flowchart in MPEP 2106).
Step 2B
Finally, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As noted above, the step of generating video data by imaging amounts to data gathering which is insignificant pre-solution activity. Such insignificant pre-solution activity does not constitute significantly more than the claimed mental process (See MPEP 2106.05(g)).
The step in which color grading is applied to the video data according to the set LUT data is a post-processing step that is insignificant post-solution activity and does not constitute significantly more than the claimed mental process (See MPEP 2106.05(g)).
Each of the other additional elements (imaging device, computer-readable storage medium, and information processing program) is a generic computer feature that performs generic computer functions which are well-understood, routine, and conventional, and does not amount to more than implementing the abstract idea with a computerized system. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea).
Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation, and mere implementation on a generic computer does not add significantly more to the claims. Accordingly, the analysis under step 2B of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (See flowchart in MPEP 2106).
For all of the foregoing reasons, independent claims 15-17 do not recite eligible subject matter under 35 USC 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6, and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2007/0255456 to Funayama et al. (hereinafter “Funayama”) in view of U.S. Patent Application Publication No. 2015/0009227 to Loeffler et al. (hereinafter “Loeffler”).
As to independent claim 1, Funayama discloses an information processing system comprising an imaging device (“terminal” containing image sensor 1; see [0027-0035, 0113-0130] and Figs. 8-9, for example) and an information processing device, wherein the information processing device includes circuitry to ([0027-0035, 0113-0130] and Figs. 8-9, for example, disclose a “server” comprising storage unit 3, imaging information analysis unit 4, image processing unit 5) acquire video data captured by the imaging device ([0063, 0066-0067, 0072, 0077] discloses that image processing unit 5 of the server receives imaging data captured by image sensor 1 which is a “video camera” of the terminal), scene specifying information, and setting information from the imaging device ([0058-0064, 0072-0077] discloses that imaging information analysis unit 4 of the server receives information corresponding to the imaging data, “such as latitude/longitude, a direction and a date” as well as “weather at the time of imaging, an imaging angle or direction, etc.”, from the imaging information obtainment unit 2 of the terminal), specify a scene in the video data based on the scene specifying information ([0072] discloses that the images “are divided into the image of a close scene and the image of a far scene according to the view angle of the imaging lens”; see also [0081-0085, 0092] wherein a “night scene” is delineated from a day scene based on the date information), and set data to be applied to the scene based on the setting information, wherein color correction is applied to the video data according to the set data ([0062-0077] discloses retrieving “image correction information” such as “color corrections” for correcting the video images of the scene “mak[ing] a reference to the database of storage unit 3” in which “image correction information…and an imaging object or an imaging condition…[are] stored correspondingly”, wherein the imaging object and the imaging condition are retrieved from the imaging information database 41 based on the latitude/longitude, direction, date, and other setting information; see at least [0060-0061]; these portions further disclose “subjecting the image data” to a “correction process” which is “based upon the correction information obtained”).
Although the retrieval process disclosed by Funayama and discussed above appears to reference table data (see tables stored in the imaging information database 41 in Figs. 2-7), Funayama does not expressly disclose that the setting information and the data to be applied to the scene are in the particular format of a look-up table (LUT). Also, while Funayama discloses a variety of color corrections that are applied to the video, the reference does not expressly disclose that the color corrections include color grading.
Loeffler, like Funayama, is directed to performing color correction (“color grading”) in video based on metadata (Abstract and [0012]). Loeffler discloses that the “color grading information” is stored and accessed as metadata using a “LUT” for “application to the selected image” in video ([0013-0022]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Funayama to store the color grading information in a LUT for retrieval and application to video images, as taught by Loeffler, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. It is predictable that the proposed modification would have saved processing time by virtue of using the simple array indexing of a LUT. Additionally, it is predictable that “color grading of selected images in such a manner affords the ability to enhance the motion picture feature presentation beyond the color properties of the original camera negative film stock, or in the case of a digitally originated movie, the color properties of the digital camera(s) that originally captured the images” ([0003] of Loeffler).
As to claim 2, Funayama as modified by Loeffler above further teaches that the scene specifying information includes at least one of information regarding an environment at a time of imaging by the imaging device, information related to an imaging function by the imaging device, or information regarding a reproduction position of the video data ([0058-0064, 0072-0077] of Funayama discloses that the information corresponding to the imaging data includes latitude/longitude, direction, date, and weather at the time of imaging, which all relate to the imaging environment, as well as an imaging angle or direction, which relate to an imaging function of the imaging device, wherein any aspect of this data can correspond to the claimed scene specifying information).
As to claim 3, Funayama as modified by Loeffler above further teaches that the LUT setting information includes at least one of information regarding an environment at a time of imaging by the imaging device, or information related to an imaging function by the imaging device ([0058-0064, 0072-0077] of Funayama discloses that the information corresponding to the imaging data includes latitude/longitude, direction, date and weather at the time of imaging, which all relate to the imaging environment, and an imaging angle or direction, which relate to an imaging function of the imaging device, wherein any aspect of this data can correspond to the claimed setting information; [0013-0022] of Loeffler discloses that the color grading information is stored and accessed as metadata using a LUT; the reasons for combining the references are the same as those discussed above in conjunction with claim 1).
As to claim 4, Funayama as modified by Loeffler above further teaches that the video data is associated with the scene specifying information for each frame constituting the video data ([0058-0067] of Funayama discloses that the imaging information is obtained for each obtained image data (frame) of the captured video).
As to claim 6, Funayama as modified by Loeffler above further teaches that the LUT data is associated with the LUT setting information ([0062-0077] of Funayama discloses “database of storage unit 3” in which “image correction information…and an imaging object or an imaging condition…[are] stored correspondingly”, wherein the imaging object and the imaging condition are retrieved by the imaging information database 41 based on the latitude/longitude, direction, date, and other setting information; see at least [0060-0061]; [0013-0022] of Loeffler discloses that the color grading information is stored and accessed as metadata using a LUT; the reasons for combining the references are the same as those discussed above in conjunction with claim 1).
Independent claim 12 recites an information processing device nearly identical to (though broader than) the one recited in the system of independent claim 1. Accordingly, claim 12 is rejected for reasons analogous to those discussed above in conjunction with claim 1.
Independent claim 13 recites an information processing method comprising the steps recited in the system of independent claim 1. Accordingly, claim 13 is rejected for reasons analogous to those discussed above in conjunction with claim 1.
Independent claim 14 recites a non-transitory computer-readable storage medium having embodied thereon an information processing program, which when executed by a computer causes the computer to execute an information processing method ([0132-0139] of Funayama discloses that the algorithm may be realized by a “computer program” that is stored on a “recording medium” and that causes the processor to perform the processes of the disclosed algorithm) comprising the steps recited in the system of independent claim 1. Accordingly, claim 14 is rejected for reasons analogous to those discussed above in conjunction with claim 1.
Claims 5 and 7-10 are rejected under 35 U.S.C. 103 as being unpatentable over Funayama in view of Loeffler and further in view of U.S. Patent Application Publication No. 2016/0055885 to Hodulik et al. (hereinafter “Hodulik”).
As to claim 5, Funayama as modified by Loeffler above further teaches that the information processing device specifies at least one frame associated with the scene specifying information matching a condition as the scene ([0072] of Funayama discloses that the images “are divided into the image of a close scene and the image of a far scene according to the view angle of the imaging lens”; see also [0081-0085, 0092] wherein a “night scene” is delineated from a day scene based on the date information, wherein the night scene is categorized as an “imaging condition” stored in conjunction with the date information; see tables in “imaging information database 41” of Figs. 2-7).
Funayama as modified by Loeffler above does not expressly disclose that the condition is specified by a user.
Hodulik, like Funayama and Loeffler, is directed to video analysis based on metadata including “information about the video itself, the camera used to capture the video, the environment or setting in which a video is captured or any other information associated with the capture of the video” (Abstract and [0020]). Hodulik discloses that a user may use an interface to select metadata filters such as “the time and date (sic) at which the video was captured, or the type of camera 130 used by the user” in order to filter particular scenes of the video ([0032]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Funayama and Loeffler to allow a user to filter scenes of interest using metadata, as taught by Hodulik, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. It is predictable that the proposed modification would have presented video to the user that “include[s] events of interest” ([0042] of Hodulik).
As to claim 7, Funayama as modified by Loeffler above further teaches that the circuitry sets LUT data associated with the LUT setting information matching a condition as LUT data to be applied to the scene ([0062-0077] of Funayama discloses retrieving “image correction information” such as “color corrections” for correcting the video images of the scene “mak[ing] a reference to the database of storage unit 3” in which “image correction information…and an imaging object or an imaging condition…[are] stored correspondingly” (emphasis added), wherein the imaging object and the imaging condition are retrieved by the imaging information database 41 based on the latitude/longitude, direction, date, and other setting information; see at least [0060-0061]; [0013-0022] of Loeffler discloses that the color grading information is stored and accessed as metadata using a LUT; the reasons for combining the references are the same as those discussed above in conjunction with claim 1).
Funayama as modified by Loeffler above does not expressly disclose that the condition is specified by a user.
Hodulik, like Funayama and Loeffler, is directed to video analysis based on metadata including “information about the video itself, the camera used to capture the video, the environment or setting in which a video is captured or any other information associated with the capture of the video” (Abstract and [0020]). Hodulik discloses that a user may use an interface to select metadata filters such as “the time and date (sic) at which the video was captured, or the type of camera 130 used by the user” in order to filter particular scenes of the video ([0032]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Funayama and Loeffler to allow a user to filter scenes of interest using metadata, as taught by Hodulik, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. It is predictable that the proposed modification would have presented video to the user that “include[s] events of interest” ([0042] of Hodulik).
As to claim 8, the proposed combination of Funayama, Loeffler, and Hodulik further teaches that the circuitry is further configured to generate an LUT application table in association with the condition and the LUT data associated with the LUT setting information matching the condition ([0062-0077] of Funayama discloses retrieving “image correction information” such as “color corrections” for correcting the video images of the scene “mak[ing] a reference to the database of storage unit 3” in which “image correction information…and an imaging object or an imaging condition…[are] stored correspondingly” (emphasis added), wherein the imaging object and the imaging condition are retrieved by the imaging information database 41 based on the latitude/longitude, direction, date, and other setting information; see at least [0060-0061] and tables in “imaging information database 41” of Figs. 2-7, wherein the tables must be generated; [0013-0022] of Loeffler discloses that the color grading information is stored and accessed as metadata using a LUT; the reasons for combining the references are the same as those discussed above in conjunction with claim 1).
As to claim 9, the proposed combination of Funayama, Loeffler, and Hodulik further teaches wherein the circuitry is further configured to apply color grading to the video data by applying the LUT data set by referring to the LUT application table ([0062-0077] of Funayama discloses retrieving “image correction information” such as “color corrections” for correcting the video images of the scene “mak[ing] a reference to the database of storage unit 3” in which “image correction information…and an imaging object or an imaging condition…[are] stored correspondingly”, wherein the imaging object and the imaging condition are retrieved by the imaging information database 41 based on the latitude/longitude, direction, date, and other setting information; see at least [0060-0061]; [0013-0022] of Loeffler discloses that the color grading information is stored and accessed as metadata using a LUT; the reasons for combining the references are the same as those discussed above in conjunction with claim 1).
As to claim 10, Funayama does not expressly disclose that in a case where there is a plurality of pieces of the LUT data associated with the LUT setting information matching the condition, the circuitry sets one piece of the LUT data selected by presenting the plurality of pieces of the LUT data to the user as LUT data to be applied to the scene.
Loeffler, like Funayama, is directed to performing color correction (“color grading”) in video based on metadata (Abstract and [0012]). Loeffler discloses that the color grading information is stored and accessed as metadata using a LUT ([0013-0022]). Loeffler further contemplates “a plurality of sets of predetermined color metadata created in advance of color grading for image preview” ([0014]). In order to “facilitate user selection of a desired image appearance, the user will make use of a graphical user interface (GUI)” in which a current frame of a scene is rendered with one of the sets of color metadata ([0015]). In this way, “the user can select a desired color metadata” to be applied to the scene ([0015-0016]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Funayama to present a user with a plurality of pre-determined color grading options for selection via a GUI, as taught by Loeffler, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. It is predictable that the proposed modification would have enhanced user experience by virtue of “allow[ing] for previewing an image file color graded in a desired manner” ([0005] of Loeffler).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Funayama in view of Loeffler and further in view of U.S. Patent Application Publication No. 2021/0076019 to Fujiwara et al. (hereinafter “Fujiwara”).
As to claim 11, Funayama discloses an identical process for identifying scenes and applying color correction thereto for all video frames (see above mapping of claim 1). Thus, in the event that multiple scenes have nearly the same scene specifying information (e.g., geographical coordinates, date, etc.), the same imaging condition and imaging object information would be retrieved, thus resulting in the application of the same color correction. However, Funayama does not expressly contemplate such a scenario. That is, Funayama as modified by Loeffler does not expressly disclose that in a case where a plurality of scenes is specified from the video data based on the scene specifying information, same LUT data is set to be applied to the plurality of scenes on a basis of the LUT setting information.
Fujiwara, like Funayama and Loeffler, is directed to performing color correction in video scenes (Abstract and [0004]). In particular, Fujiwara contemplates a scenario in which video of a similar scene having a same subject is identified, in which case both scenes are subject to the same color correction (Abstract and [0079]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Funayama and Loeffler to apply a same color correction to a plurality of identified scenes (wherein the scenes would have the same scene specifying information, for example, by virtue of a common object therein), as taught by Fujiwara, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. It is predictable that the proposed modification would have made it “possible to reduce labor of the user” by virtue of applying the same correction to similar scenes ([0084] of Fujiwara).
Claims 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2007/0255456 to Funayama et al. (hereinafter “Funayama”) in view of U.S. Patent Application Publication No. 2016/0182815 to Urabe (hereinafter “Urabe”) and further in view of U.S. Patent Application Publication No. 2015/0009227 to Loeffler et al. (hereinafter “Loeffler”).
As to independent claim 15, Funayama discloses an imaging device that generates video data ([0063, 0066-0067, 0072, 0077] discloses that image processing unit 5 of the server receives imaging data captured by image sensor 1, which is a “video camera” of the terminal), extracts a scene from the video data based on scene specifying information ([0058-0064, 0072-0077] discloses that imaging information analysis unit 4 of the server receives information corresponding to the imaging data, “such as latitude/longitude, a direction and a date” as well as “weather at the time of imaging, an imaging angle or direction, etc.”, from the imaging information obtainment unit 2 of the terminal; [0072] discloses that the images “are divided into the image of a close scene and the image of a far scene according to the view angle of the imaging lens”; see also [0081-0085, 0092] wherein a “night scene” is delineated from a day scene based on the date information), and sets correction data to be applied to the scene, wherein correction is applied to the video data according to the set correction data ([0062-0077] discloses retrieving “image correction information” such as “color corrections” for correcting the video images of the scene “mak[ing] a reference to the database of storage unit 3” in which “image correction information…and an imaging object or an imaging condition…[are] stored correspondingly”, wherein the imaging object and the imaging condition are retrieved from the imaging information database 41 based on the latitude/longitude, direction, date, and other setting information; see at least [0060-0061]; these portions further disclose “subjecting the image data” to a “correction process” which is “based upon the correction information obtained”).
Funayama does not expressly disclose that all of the above processing is performed within a same imaging device, instead disclosing a terminal comprising an imaging device which generates the video data and a separate server which extracts the scene and sets the correction data. Also, although the retrieval disclosed by Funayama and discussed above appears to reference table data (see tables in Figs. 2-7), Funayama does not expressly disclose that the setting information and the data to be applied to the scene are in the particular format of a look-up table (LUT). Lastly, while Funayama discloses a variety of color corrections that are applied to the video, the reference does not expressly disclose that the color corrections include color grading.
Urabe, like Funayama, is directed to video (“movie”) content editing comprising color correction (“color grading”; Abstract and [0003-0008, 0044-0047]), wherein the correction is applied to individual scenes ([0086-0090]). Urabe contemplates an arrangement similar to Funayama in which the adjusting parameter may be stored in and read from an external storage 215 (Fig. 2A and [0062]). Urabe also discloses an alternative embodiment in which the adjusting parameter is stored “in the storage unit 204 of the image capturing apparatus” ([0066] and Fig. 2B). In this arrangement, the image-quality adjusting parameter is recorded at the start of the recording process for all frames that are recorded, and then the adjusting parameter is read in from the storage unit 204 and automatically applied to the video frames at the time of reproducing the video ([0066-0075]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Funayama to perform all of the processing steps for retrieving the image correction parameters within a same imaging device that captures the images, as taught by Urabe, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results.