Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “setting unit,” “generation unit,” “selection unit,” “recognition unit,” “excitement recognition unit,” “graphic recognition unit,” and “text recognition unit” in claims 1, 3-10, and 12-20.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim limitations “setting unit,” “generation unit,” “selection unit,” “recognition unit,” “excitement recognition unit,” “graphic recognition unit,” and “text recognition unit” invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The disclosure is devoid of any structure that performs the claimed functions. Therefore, the claims are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because the claim is directed to a program.
The courts have identified that products that do not have a physical or tangible form, such as information (often referred to as "data per se") or a computer program per se (often referred to as "software per se") when claimed as a product without any structural recitations are not directed to any of the statutory categories.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-6, 9-12, and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shahraray et al. (US 2011/0093798 A1).
Regarding claims 1, 19, and 20, Shahraray discloses an information processing apparatus, a method, and a program for causing a computer to function, comprising:
A setting unit that performs setting of a recognition unit that detects detection metadata which is metadata regarding a predetermined recognition target by performing recognition processing on the recognition target (See [0024-0025]: a plurality of scenes is automatically detected in the content using pattern matching and metadata searching) on a basis of a sample scene which is a scene of a content designated by a user (See [0025]: a user may specify, or indicate through user feedback, the importance of features for detected scenes); and
A generation unit that generates a cutting rule for cutting scenes from the content on a basis of the detection metadata detected by performing the recognition processing on the content as a processing target by the recognition unit in which the setting is performed and the sample scene (The scenes are analyzed for relevance according to the features detected and ranked according to the detected features, i.e., reads on a cutting rule for scenes from the content. The features are weighted based on the user input and the least relevant scenes are removed from the plurality of scenes in step 408, see [0024-0025]).
Regarding claim 2, Shahraray further discloses the information processing apparatus according to claim 1, wherein the sample scene is obtained by designating IN points and OUT points of some scenes of the content, or is designated by being selected from scenes to become one or more candidates of the content (See [0024-0026] a user may specify the features which are most important to be included in the output combined scene. See also Fig 5 and [0028-0032] where a relevancy of a frame object can be determined based on pattern matching metadata of the source material).
Regarding claim 3, Shahraray further discloses the information processing apparatus according to claim 1, wherein the setting unit generates setting information to be set in the recognition unit on a basis of the detection metadata that characterizes the sample scene or the IN point and the OUT point of the sample scene (See [0024-0025] the features designating the scene recognition features set by the users).
Regarding claim 4, Shahraray further discloses the information processing apparatus according to claim 1, wherein the setting unit generates setting information to be set in the recognition unit in accordance with an operation of a user for the detection metadata that characterizes the sample scene or an IN point and an OUT point of the sample scene (See [0024-0025] a user sets the features designating the scene recognition).
Regarding claim 5, Shahraray further discloses the information processing apparatus according to claim 1, further comprising: a selection unit that selects a recognition unit to be used for the recognition processing on the content from a plurality of the recognition units on a basis of a history of an operation for designating an IN point and an OUT point of the sample scene (See [0024-0025] pattern matching or metadata search).
Regarding claim 6, Shahraray further discloses the information processing apparatus according to claim 4,
wherein the recognition unit is an excitement recognition unit using excitement as the recognition target (See [0032] emotion determination),
the excitement recognition unit detects, as the detection metadata, an excitement score indicating a degree of excitement (See [0032] overall sentiment for a frame based on sentiment determination), and
the setting unit generates a threshold value to be compared with the excitement score, as the setting information of the excitement recognition unit, such that a section from the IN point to the OUT point of the sample scene is determined to be an excitement section based on the excitement score in determining the excitement section (See [0028-0036], Fig. 5).
Regarding claim 9, Shahraray discloses the information processing apparatus according to claim 1, wherein the generation unit generates the cutting rule on a basis of a plurality of the sample scenes (See [0025] the features are a basis for detecting important scenes and are used to determine which scenes to remove. The cutting rule reads on ranking the scenes based on importance.).
Regarding claim 10, Shahraray discloses the information processing apparatus according to claim 9, wherein the generation unit generates the cutting rule on a basis of cutting metadata to be used for cutting of the scene from the content, which is generated on a basis of the detection metadata (See [0025] a scene may be considered relevant based on a number of predetermined factors and the features are weighted.).
Regarding claim 11, Shahraray discloses the information processing apparatus according to claim 10, wherein the cutting metadata includes at least one of metadata of an Appear type indicating appearance or disappearance of any target, metadata of an Exist type indicating that any target is present or is not present, or metadata of a Change type indicating a change in a value (See [0025] face recognition indicating appearance).
Regarding claim 12, Shahraray discloses the information processing apparatus according to claim 10, wherein the generation unit generates, as the cutting rule, similar pieces of cutting metadata among the plurality of sample scenes (See [0025] A scene may be considered relevant (e.g., important) based on a number of predetermined factors such as clustering, difference from and/or similarity to other scenes, motion within the scene, face recognition within the scene, etc. Accordingly, at least one feature is detected in each scene and the scenes are ranked according to the detected at least one feature.).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Shahraray et al. (US 2011/0093798 A1) in view of Konig et al. (US 2006/0195859 A1).
Regarding claim 7, Shahraray discloses the information processing apparatus according to claim 4, but does not explicitly disclose wherein the recognition unit is a graphic recognition unit using a graphic as the recognition target, the graphic recognition unit recognizes a graphic in a predetermined region in a frame of the content, and the setting unit generates a region which is a target of graphic recognition as the setting information of the graphic recognition unit in accordance with an operation of a user for a region surrounding a graphic detected in a frame in a vicinity of each of the IN point and the OUT point of the sample scene.
Konig discloses that it was known to recognize graphics in video (See [0078]).
Prior to the effective filing date of the invention it would have been obvious to one of ordinary skill in the art to modify the known system of Shahraray with the known methods of Konig predictably resulting in wherein the recognition unit is a graphic recognition unit using a graphic as the recognition target, the graphic recognition unit recognizes a graphic in a predetermined region in a frame of the content, and the setting unit generates a region which is a target of graphic recognition as the setting information of the graphic recognition unit in accordance with an operation of a user for a region surrounding a graphic detected in a frame in a vicinity of each of the IN point and the OUT point of the sample scene, by applying the court-recognized rationale of applying a known technique to a known device (method, or product) ready for improvement to yield predictable results. The modification would have the benefit of detecting scenes such as advertisements as suggested by Konig.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Shahraray et al. (US 2011/0093798 A1) in view of Watanabe (US 2019/0294911 A1).
Regarding claim 8, Shahraray discloses the information processing apparatus according to claim 4, but does not explicitly disclose wherein the recognition unit is a text recognition unit using a text as the recognition target, the text recognition unit performs character recognition in a predetermined region in a frame of the content, and the setting unit selects, as a region as setting information to be set in the text recognition unit, the region of a specific character attribute among character attributes indicating meanings of characters in the region, which are obtained by meaning estimation of the characters recognized by the character recognition of the characters in the region, from regions surrounding characters detected in character detection of one frame of the sample scene.
Watanabe discloses that it was known to perform text recognition by recognizing characters (See [0026-0041]).
Prior to the effective filing date of the invention it would have been obvious to one of ordinary skill in the art to modify the known system of Shahraray with the known methods of Watanabe predictably resulting in wherein the recognition unit is a text recognition unit using a text as the recognition target, the text recognition unit performs character recognition in a predetermined region in a frame of the content, and the setting unit selects, as a region as setting information to be set in the text recognition unit, the region of a specific character attribute among character attributes indicating meanings of characters in the region, which are obtained by meaning estimation of the characters recognized by the character recognition of the characters in the region, from regions surrounding characters detected in character detection of one frame of the sample scene, by applying the court-recognized rationale of applying a known technique to a known device (method, or product) ready for improvement to yield predictable results. The modification would have the benefit of recognizing scenes based on text characters.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Shahraray et al. (US 2011/0093798 A1) in view of Joshi et al. (US 2010/0017818 A1).
Regarding claim 13, Shahraray discloses the information processing apparatus according to claim 12, but does not explicitly disclose wherein the generation unit generates, as the cutting rule, IN point metadata which is the cutting metadata to be used for detection of an IN point of a cut scene cut from the content, OUT point metadata which is the cutting metadata to be used for detection of an OUT point of the cut scene, and a section event which is the event to be used for detection of an event which is a combination of one or more pieces of cutting metadata present in the cut scene.
Joshi discloses that it was known to identify scenes in content using metadata including time stamps (See [0014], [0022]).
Prior to the effective filing date of the invention it would have been obvious to one of ordinary skill in the art to modify the known system of Shahraray with the known methods of Joshi predictably resulting in wherein the generation unit generates, as the cutting rule, IN point metadata which is the cutting metadata to be used for detection of an IN point of a cut scene cut from the content, OUT point metadata which is the cutting metadata to be used for detection of an OUT point of the cut scene, and a section event which is the event to be used for detection of an event which is a combination of one or more pieces of cutting metadata present in the cut scene, by applying the court-recognized rationale of applying a known technique to a known device (method, or product) ready for improvement to yield predictable results. The modification would have the benefit of providing additional means for identifying segments in content.
Allowable Subject Matter
Claims 14-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if the claims were amended to overcome the issues with respect to 35 U.S.C. 112 as indicated above.
The following is a statement of reasons for the indication of allowable subject matter:
The prior art fails to disclose or fairly suggest, alone or in combination, all of the features of dependent claims 14-18.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FERNANDO ALCON whose telephone number is (571)270-5668. The examiner can normally be reached Monday-Friday, 9:00am-7:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton, can be reached at (571)272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
FERNANDO ALCON
Examiner
Art Unit 2425
/FERNANDO ALCON/ Primary Examiner, Art Unit 2425