DETAILED ACTION
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
2. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Continued Examination Under 37 CFR 1.114
3. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/27/2025 has been entered.
Response to Amendment
4. Receipt of Applicant’s Amendment filed on 10/27/2025 is acknowledged. The amendment cancels claims 11-12 and amends claims 1 and 14-15.
Claim Interpretation
5. Claims 1 and 3-10 are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Objections
6. Claims 1 and 14-15 are objected to because of the following informalities: The claim language of “which is Extensible Markup Language (XML) file” in the limitation “at least one processor configured to function as: a recording control unit configured to control to record, into a recording medium, a moving image file including moving image data obtained by performing image capturing and an additional information file which is Extensible Markup Language (XML) file and includes additional information of the moving image data” is grammatically incoherent and should be replaced with “at least one processor configured to function as: a recording control unit configured to control to record, into a recording medium, a moving image file including moving image data obtained by performing image capturing and an additional information file which is an Extensible Markup Language (XML) file and includes additional information of the moving image data”. Appropriate correction is required.
Dependent claims 3-10 and 13 are objected to for incorporating the deficiencies of independent claim 1.
Claims 1 and 14-15 are objected to because of the following informalities: The claim language of “is recorded as an XML file in same XML format file” in the limitation “wherein although a structure of the additional information is different depending on the recording format of the additional information selected by the selection unit, the additional information file is recorded as an XML file in same XML format file, regardless of the recording format of the additional information selected by the selection unit” is grammatically incoherent and should be replaced with “wherein although a structure of the additional information is different depending on the recording format of the additional information selected by the selection unit, the additional information file is recorded as an XML file in a same XML format file, regardless of the recording format of the additional information selected by the selection unit”. Appropriate correction is required.
Dependent claims 3-10 and 13 are objected to for incorporating the deficiencies of independent claim 1.
Claim Rejections - 35 USC § 112
7. The rejections raised in the Office Action mailed on 08/21/2025 have been overcome by Applicant’s amendment received on 10/27/2025.
Claim Rejections - 35 USC § 103
8. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
9. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
10. Claims 1, 3-4, 6-7, and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Sarubin et al. (U.S. PGPUB 2020/0210753), in view of Soneka et al. (JP 2009187213A (Machine Translation Provided), dated 20 August 2009), and in view of Hu et al. (Article entitled “Ontology Design for Online News Analysis”, dated 2009), and further in view of Ishizu et al. (U.S. PGPUB 2016/0065887).
11. Regarding claims 1, 14, and 15, Sarubin teaches an image capturing apparatus, method, and non-transitory computer readable storage medium comprising:
A) at least one processor configured to function as: a recording control unit configured to control to record, into a recording medium, a moving image file including moving image data obtained by performing image capturing and an additional information file which is a file and includes additional information of the moving image data (Paragraph 19).
The examiner notes that Sarubin teaches “at least one processor configured to function as: a recording control unit configured to control to record, into a recording medium, a moving image file including moving image data obtained by performing image capturing and an additional information file which is a file and includes additional information of the moving image data” as “the system 10 includes the interchange layer and registry classification service 14 which receives and interprets instructions for handling corresponding assets 24 and/or metadata 26 from various sources. The interchange layer and registry classification service 14 may provide an application programming interface (API) which provides a mechanism to supply metadata instructions, which will ultimately result in automatic mapping of metadata to prescribed registry IDs, as will be discussed in more detail below. The interchange layer and registry classification service 14 receives a collection of files. In this case, the interchange layer and registry classification service 14 receives multiple assets, such as a content file 24 and associated metadata 26. It may be appreciated that the interchange layer and registry classification service 14 can be accessed by and receive assets from a variety of production teams in the filmmaking process. In practice, several content files (e.g., multiple video clips, sequences, versions, etc.) may be received by the interchange layer and registry classification service 14, as may be several metadata-only files. It should also be noted that the term “files” as used in the present discussion, may include both stored data and streaming content. The content, depending upon the processing contexts, may sometimes be referred to as an asset. The metadata 26 will typically come from various sources (e.g., source 1, source 2, etc.) or vendors and will relate to the content and may provide additional information about the content, such as title, identifying data, source, various date and time stamps, and so forth. The metadata may be used to track the content and therefore facilitate its management through production, post production, storage, retrieval, commercial and non-commercial distribution, and so forth” (Paragraph 19). The examiner further notes that a system 10 that stores content file(s) 24 (which include video) and corresponding metadata file(s) 26 (See “metadata-only files”) teaches the claimed moving image file and the file that includes additional information, respectively.
Sarubin does not explicitly teach:
A) an additional information file which is Extensible Markup Language (XML) file;
B) a selection unit configured to select a recording format of the additional information that is recorded in the additional information file in accordance with a user's operation;
C) wherein the recording control unit controls to automatically generate and record the additional information as the additional information file in the recording format selected by the selection unit into the recording medium;
D) wherein although a structure of the additional information is different depending on the recording format of the additional information selected by the selection unit.
Soneka, however, teaches “an additional information file which is Extensible Markup Language (XML) file” as “image data can be managed with items determined from the viewpoint of the user, and the metadata format can be customized according to the user” (Page 2) and “The metadata file 60 in FIG. 3 is described in an xml format” (Page 3), “a selection unit configured to select a recording format of the additional information that is recorded in the additional information file in accordance with a user's operation” as “image data can be managed with items determined from the viewpoint of the user, and the metadata format can be customized according to the user” (Page 2) and “The metadata file 60 in FIG. 3 is described in an xml format” (Page 3), “wherein the recording control unit controls to automatically generate and record the additional information as the additional information file in the recording format selected by the selection unit into the recording medium” as “image data can be managed with items determined from the viewpoint of the user, and the metadata format can be customized according to the user” (Page 2) and “The metadata file 60 in FIG. 3 is described in an xml format” (Page 3), and “wherein although a structure of the additional information is different depending on the recording format of the additional information selected by the selection unit” as “image data can be managed with items determined from the viewpoint of the user, and the metadata format can be customized according to the user” (Page 2), “The metadata file 60 in FIG. 3 is described in an xml format” (Page 3), and “Whether the format of the metadata file is XML or CSV is specified” (Page 4).
The examiner further notes that the secondary reference of Soneka teaches the concept of a user customizing (i.e. selecting) the format of the metadata of a media file as either CSV or XML (i.e. metadata files with different structures). The combination would result in a user selecting the format of the metadata that is recorded in the primary reference of Sarubin. Moreover, although Sarubin teaches a separate metadata file (i.e. a file that houses additional information), there is no explicit teaching that such a metadata file is in an XML format. Nevertheless, Soneka teaches the concept of a metadata file (i.e. a file that houses additional information) being in an XML format. The combination would result in the metadata file of Sarubin being in an XML format.
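Purely as an illustrative sketch (not drawn from any cited reference; all function and field names are hypothetical), the user-selectable metadata recording format discussed above can be modeled as a serializer that emits either XML or CSV for the same metadata items:

```python
import csv
import io
import xml.etree.ElementTree as ET


def serialize_metadata(metadata, fmt):
    """Serialize a metadata dict in a user-selected format ('xml' or 'csv').

    The two outputs carry the same items but have different structures,
    mirroring the user-customizable metadata format described by Soneka.
    """
    if fmt == "xml":
        root = ET.Element("metadata")
        for key, value in metadata.items():
            ET.SubElement(root, key).text = str(value)
        return ET.tostring(root, encoding="unicode")
    elif fmt == "csv":
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(metadata.keys())    # header row: item names
        writer.writerow(metadata.values())  # data row: item values
        return buf.getvalue()
    raise ValueError("unsupported format: " + fmt)
```

For example, `serialize_metadata({"title": "clip01"}, "xml")` yields an XML document, while the same call with `"csv"` yields a two-line CSV table.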
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Soneka’s teachings would have allowed Sarubin’s system to provide a method for easily allowing users to modify metadata, as noted by Soneka (Page 1).
Sarubin and Soneka do not explicitly teach:
D) the additional information file is recorded as an XML file in same XML format file, regardless of the recording format of the additional information selected by the selection unit.
Hu, however, teaches “the additional information file is recorded as an XML file in same XML format file, regardless of the recording format of the additional information selected by the selection unit” as “In this paper, a news ontology incorporating some metadata in OpenCyc, EventsML-G2, NewsML-G2, and News Industry Text Format (NITF) is designed at first” (Page 202, Abstract), “Some metadata in EventsML-G2 [8], NewsML-G2 [9], and NITF [10] are reused in our ontology. EventsML-G2 is a standard for conveying event information in a news industry environment, which is comprehensive (many types of events may be covered) and extensible (news provider specific data may be added). NewsML-G2 provides metadata that describe the news content in an abstract way. NITF uses the XML (eXtensible Markup Language) to define the content and structure of news articles… Metadata from EventsML-G2…Metadata from NewsML-G2… Metadata from NITF” (Page 203, Section III(B)).
The examiner further notes that the secondary reference of Hu teaches the concept of multiple different XML-based metadata formats (See examples of EventsML-G2, NewsML-G2, and NITF, each having a different structure) that can be used to describe metadata. The combination would result in expanding the selectable XML metadata format of Soneka to also include such specific XML metadata formats. Furthermore, such XML-based metadata (i.e. via EventsML-G2, NewsML-G2, and NITF) would be stored in a same XML format file regardless of which format was chosen.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Hu’s teachings would have allowed Sarubin and Soneka to provide a method for reusing metadata from different XML-based metadata formats, as noted by Hu (Page 203, Section III(B)).
Sarubin, Soneka, and Hu do not explicitly teach:
E) wherein the recording control unit controls to record identification information indicating the recording format of the additional information recorded in the additional information file into the moving image file different from the additional information file, so that an external apparatus can identify the structure type of the additional information recorded in the additional information file based on the identification information recorded in the moving image file by analyzing the moving image file without analyzing the additional information file.
Ishizu, however, teaches “wherein the recording control unit controls to record identification information indicating the recording format of the additional information recorded in the additional information file into the moving image file different from the additional information file, so that an external apparatus can identify the structure type of the additional information recorded in the additional information file based on the identification information recorded in the moving image file by analyzing the moving image file without analyzing the additional information file” as “FIG. 3B is a schematic diagram of a configuration of image data according to the present exemplary embodiment. The image data is, for example, generated by the digital camera 100 and recorded in a data recording area of the storage medium 110” (Paragraph 54) and “The Exif IFD 318 includes a tag regarding an Exif version, a tag regarding a characteristic and a structure of image data, a tag regarding a photographed date and time, a tag regarding photographed conditions in which a shutter speed and a lens focal distance are recorded, and other tags” (Paragraph 58).
The examiner further notes that although the secondary reference of Soneka teaches the recording of a metadata format (i.e. the claimed recording format of additional information), there is no explicit teaching of storing such information in the media file to which the metadata corresponds. Nevertheless, the secondary reference of Ishizu teaches the concept of storing EXIF version data (i.e. the claimed format of additional information in the broadest reasonable interpretation) in a corresponding media file. The combination would result in storing the metadata format of Soneka in the actual video file of Sarubin (which is already separate from the metadata file of Sarubin). Moreover, the combination would allow the version of the metadata file to be determined without reading the metadata file itself, because that version is stored in the media file.
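As a hypothetical sketch (not code from any cited reference; the header layout and field names are assumptions), recording a format identifier in the moving image file itself means a reader can learn the sidecar metadata file’s structure without ever opening that metadata file:

```python
import json


def build_video_header(metadata_format, metadata_version):
    """Hypothetical: build the identification information that would be
    embedded in the moving image file's own header, describing the format
    of the separate (sidecar) metadata file."""
    header = {
        "metadata_format": metadata_format,    # e.g. "NewsML-G2" or "original"
        "metadata_version": metadata_version,  # version of that format
    }
    return json.dumps(header)


def identify_metadata_format(header_blob):
    """Read only the video header to learn how the sidecar metadata is
    structured, without analyzing the metadata file itself."""
    header = json.loads(header_blob)
    return header["metadata_format"], header["metadata_version"]
```

The design point mirrors the combination above: the identification information lives in the media file, so an external apparatus parses only the media file to choose the right parser for the metadata file.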
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Ishizu’s teachings would have allowed Sarubin, Soneka, and Hu to provide a method for improving responses to user requests regarding media data, as noted by Ishizu (Paragraph 194).
Regarding claim 3, Sarubin does not explicitly teach an image capturing apparatus comprising:
A) wherein the selection unit selects an original format or a standardized standard format as the recording format of the additional information recorded in the XML file;
B) XML file.
Soneka, however, teaches “wherein the selection unit selects an original format or a standardized standard format as the recording format of the additional information recorded in the additional information file” as “The metadata file 60 in FIG. 3 is described in an xml format” (Page 3) and “XML file” as “image data can be managed with items determined from the viewpoint of the user, and the metadata format can be customized according to the user” (Page 2) and “The metadata file 60 in FIG. 3 is described in an xml format” (Page 3).
The examiner further notes that although the primary reference of Sarubin teaches a separate metadata file (i.e. a file housing additional information), there is no explicit teaching that such a metadata file is in an XML format (i.e. the claimed “standardized” format in the broadest reasonable interpretation). Nevertheless, the secondary reference of Soneka teaches the concept of a metadata file (i.e. a file housing additional information) being in an XML format (i.e. a standardized format) in accordance with a user selection. The combination would result in the metadata file of Sarubin being in an XML format.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Soneka’s teachings would have allowed Sarubin’s system to provide a method for easily allowing users to modify metadata, as noted by Soneka (Page 1).
Sarubin, Soneka, and Hu do not explicitly teach:
B) the recording control unit controls to record, into the moving image file, a category indicating whether a format of the additional information in the file is the original format or the standard format as the identification information.
Ishizu, however, teaches “the recording control unit controls to record, into the moving image file, a category indicating whether a format of the additional information in the file is the original format or the standard format as the identification information” as “FIG. 3B is a schematic diagram of a configuration of image data according to the present exemplary embodiment. The image data is, for example, generated by the digital camera 100 and recorded in a data recording area of the storage medium 110” (Paragraph 54) and “The Exif IFD 318 includes a tag regarding an Exif version, a tag regarding a characteristic and a structure of image data, a tag regarding a photographed date and time, a tag regarding photographed conditions in which a shutter speed and a lens focal distance are recorded, and other tags” (Paragraph 58).
The examiner further notes that although the secondary reference of Soneka teaches the recording of a metadata format (i.e. the claimed recording format of additional information), there is no explicit teaching of storing such information in the media file to which the metadata corresponds. Nevertheless, the secondary reference of Ishizu teaches the concept of storing EXIF version data (i.e. an example of a standardized format of additional information in the broadest reasonable interpretation) in a corresponding media file. The combination would result in storing the standardized metadata format of Soneka and Hu in the actual video file of Sarubin.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Ishizu’s teachings would have allowed Sarubin, Soneka, and Hu to provide a method for improving responses to user requests regarding media data, as noted by Ishizu (Paragraph 194).
Regarding claim 4, Sarubin and Soneka do not explicitly teach an image capturing apparatus comprising:
A) wherein the standardized standard format is a NewsML-G2 format standardized by International Press Telecommunication Council (IPTC).
Hu, however, teaches “wherein the standardized standard format is a NewsML-G2 format standardized by International Press Telecommunication Council (IPTC)” as “In this paper, a news ontology incorporating some metadata in OpenCyc, EventsML-G2, NewsML-G2, and News Industry Text Format (NITF) is designed at first” (Page 202, Abstract), “Some metadata in EventsML-G2 [8], NewsML-G2 [9], and NITF [10] are reused in our ontology. EventsML-G2 is a standard for conveying event information in a news industry environment, which is comprehensive (many types of events may be covered) and extensible (news provider specific data may be added). NewsML-G2 provides metadata that describe the news content in an abstract way. NITF uses the XML (eXtensible Markup Language) to define the content and structure of news articles… Metadata from NewsML-G2” (Page 203, Section III(B)).
The examiner further notes that the secondary reference of Hu teaches the concept of multiple different XML-based metadata formats (including NewsML-G2) that can be used to describe metadata. The combination would result in expanding the selectable XML metadata format of Soneka to also include the NewsML-G2 format.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Hu’s teachings would have allowed Sarubin and Soneka to provide a method for reusing metadata from different XML-based metadata formats, as noted by Hu (Page 203, Section III(B)).
Regarding claim 6, Sarubin, Soneka, and Hu do not explicitly teach an image capturing apparatus comprising:
A) wherein the recording control unit controls to record the identification information and a thumbnail image into a header of the moving image file.
Ishizu, however, teaches “wherein the recording control unit controls to record the identification information and a thumbnail image into a header of the moving image file” as “When the thumbnail size is specified, a thumbnail recorded in a header of image data recorded in the digital camera 100 is obtained” (Paragraph 51), “The Exif information is recorded in a header of image data, and it is necessary to read and analyze the header of the image data to recognize contents thereof” (Paragraph 53), and “The Exif IFD 318 includes a tag regarding an Exif version, a tag regarding a characteristic and a structure of image data, a tag regarding a photographed date and time, a tag regarding photographed conditions in which a shutter speed and a lens focal distance are recorded, and other tags” (Paragraph 58).
The examiner further notes that although the secondary reference of Soneka teaches the recording of a metadata format (i.e. the claimed recording format of additional information), there is no explicit teaching of storing such information in a header of the media file to which the metadata corresponds. Nevertheless, the secondary reference of Ishizu teaches the concept of storing EXIF version data (i.e. identification information) and a thumbnail image in the header of a corresponding media file. The combination would result in storing the metadata format of Soneka and Hu and a thumbnail in the actual video file of Sarubin.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Ishizu’s teachings would have allowed Sarubin, Soneka, and Hu to provide a method for improving responses to user requests regarding media data, as noted by Ishizu (Paragraph 194).
Regarding claim 7, Sarubin, Soneka, and Hu do not explicitly teach an image capturing apparatus comprising:
A) wherein the recording control unit controls to record information indicating the recording format selected by the selection unit and version information as the identification information.
Ishizu, however, teaches “wherein the recording control unit controls to record information indicating the recording format selected by the selection unit and version information as the identification information” as “FIG. 3B is a schematic diagram of a configuration of image data according to the present exemplary embodiment. The image data is, for example, generated by the digital camera 100 and recorded in a data recording area of the storage medium 110” (Paragraph 54) and “The Exif IFD 318 includes a tag regarding an Exif version, a tag regarding a characteristic and a structure of image data, a tag regarding a photographed date and time, a tag regarding photographed conditions in which a shutter speed and a lens focal distance are recorded, and other tags” (Paragraph 58).
The examiner further notes that although the secondary reference of Soneka teaches the recording of a metadata format (i.e. the claimed recording format of additional information) that is based on a user selection as “identification information”, there is no explicit teaching of also recording version data of such a metadata format. Nevertheless, the secondary reference of Ishizu teaches the concept of storing EXIF version data (i.e. metadata format (EXIF) and version information) in a corresponding media file. The combination would result in storing the metadata format of Soneka and Hu in the actual video file of Sarubin.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Ishizu’s teachings would have allowed Sarubin, Soneka, and Hu to provide a method for improving responses to user requests regarding media data, as noted by Ishizu (Paragraph 194).
12. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Sarubin et al. (U.S. PGPUB 2020/0210753), in view of Soneka et al. (JP 2009187213A (Machine Translation Provided), dated 20 August 2009), and in view of Hu et al. (Article entitled “Ontology Design for Online News Analysis”, dated 2009), and further in view of Ishizu et al. (U.S. PGPUB 2016/0065887) as applied to claims 1, 3-4, 6-7, and 14-15 above, and further in view of Tayeb et al. (Article entitled “Toward Metadata Removal to Preserve Privacy of Social Media Users”, dated 2018).
13. Regarding claim 5, Sarubin, Soneka, and Hu do not explicitly teach an image capturing apparatus comprising:
A) wherein the recording control unit controls to record, into the moving image file, the identification information indicating the recording format selected by the selection unit.
Ishizu, however, teaches “wherein the recording control unit controls to record, into the moving image file, the identification information indicating the recording format selected by the selection unit” as “FIG. 3B is a schematic diagram of a configuration of image data according to the present exemplary embodiment. The image data is, for example, generated by the digital camera 100 and recorded in a data recording area of the storage medium 110” (Paragraph 54) and “The Exif IFD 318 includes a tag regarding an Exif version, a tag regarding a characteristic and a structure of image data, a tag regarding a photographed date and time, a tag regarding photographed conditions in which a shutter speed and a lens focal distance are recorded, and other tags” (Paragraph 58).
The examiner further notes that although the secondary reference of Soneka teaches the recording of a metadata format (i.e. the claimed recording format of additional information), there is no explicit teaching of storing such information in the media file to which the metadata corresponds. Nevertheless, the secondary reference of Ishizu teaches the concept of storing EXIF version data (i.e. the claimed format of additional information in the broadest reasonable interpretation) in a corresponding media file. The combination would result in storing the metadata format of Soneka and Hu in the actual video file of Sarubin.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Ishizu’s teachings would have allowed Sarubin, Soneka, and Hu to provide a method for improving responses to user requests regarding media data, as noted by Ishizu (Paragraph 194).
Sarubin, Soneka, Hu, and Ishizu do not explicitly teach:
A) without recording all of the additional information recorded into the additional information file.
Tayeb, however, teaches “without recording all of the additional information recorded into the additional information file” as “We have also supplied a programming example in the Python programming language that performs the removal of all the metadata, or just the Global Positioning System location data, from images before they are uploaded” (Abstract) and “Our proposed method stems from similar programs accessible to the public, but will differ in that users are allowed to choose to strip the entire picture, or just the GPS location, which is the easiest type of metadata to glean further information from” (Section II).
The examiner further notes that the secondary reference of Tayeb teaches the concept of not recording all image metadata into an image (See the example of not recording GPS metadata via the removal of such GPS metadata before image uploading). The combination would allow the users of Ishizu to selectively refrain from recording all of the metadata into an image.
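As a hypothetical sketch of the selective-removal concept (this is not code from Tayeb; the key names are assumptions), stripping either all metadata or only the GPS fields before recording might look like:

```python
def strip_metadata(metadata, remove_all=False,
                   gps_keys=("gps_latitude", "gps_longitude")):
    """Return a copy of the metadata with either everything removed
    (remove_all=True) or only the GPS location fields removed, echoing
    Tayeb's choice between stripping all metadata or just GPS data."""
    if remove_all:
        return {}
    return {k: v for k, v in metadata.items() if k not in gps_keys}
```

Under this sketch, the user’s choice determines whether none, some, or all of the additional information reaches the recorded file.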
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Tayeb’s teachings would have allowed Sarubin, Soneka, Hu, and Ishizu to provide a method for reducing media file size, as noted by Tayeb (Section III(A)).
14. Claims 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Sarubin et al. (U.S. PGPUB 2020/0210753), in view of Soneka et al. (JP 2009187213A (Machine Translation Provided), dated 20 August 2009), and in view of Hu et al. (Article entitled “Ontology Design for Online News Analysis”, dated 2009), and further in view of Ishizu et al. (U.S. PGPUB 2016/0065887) as applied to claims 1, 3-4, 6-7, and 14-15 above, and further in view of Pircher et al. (U.S. PGPUB 2014/0245123).
15. Regarding claim 8, Sarubin further teaches an image capturing apparatus comprising:
A) wherein the at least one processor is configured to further function as a communication unit that communicates with a editing device (Paragraph 38, Figure 1).
The examiner notes that Sarubin teaches “wherein the at least one processor is configured to further function as a communication unit that communicates with a editing device” as “the asset and its associated asset ID (e.g., pictures, untouched raw files, etc.) are stored in the storage system 12. It may be appreciated that changes to the assets and/or the metadata may be made via the orchestration layer service 16. As will be described in further detail below, a user may enter a command via a user interface 32 of a computing device to modify the asset. By storing the asset and its associated asset ID in the storage system 12, changes made by the user's modification via the orchestration layer service 16 can be tracked and stored in the storage system 12 and/or the master production database 18. Therefore, metadata corresponding to an asset may be updated even after initial ingesting at the interchange layer and registry classification service 14” (Paragraph 38). The examiner further notes that communication between a user device (with interface 32) and remote devices 12 and/or 18 (i.e. editing devices in the broadest reasonable interpretation, as the claimed editing device is undefined in the claims) teaches the aforementioned.
Sarubin does not explicitly teach:
B) XML file.
Soneka, however, teaches “XML file” as “image data can be managed with items determined from the viewpoint of the user, and the metadata format can be customized according to the user” (Page 2) and “The metadata file 60 in FIG. 3 is described in an xml format” (Page 3).
The examiner further notes that although Sarubin teaches a separate metadata file (i.e., a file that houses additional information), there is no explicit teaching that such a metadata file is in an XML format. Nevertheless, Soneka teaches the concept of a metadata file (i.e., a file that houses additional information) being in an XML format. The combination would result in the metadata file of Sarubin being in an XML format.
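For illustration only (not part of the record of this application), a minimal sketch of the kind of separate XML "additional information" file contemplated by the proposed Sarubin/Soneka combination could be generated as follows; all element names and values here are hypothetical:

```python
# Illustrative sketch only: a hypothetical separate XML metadata file
# ("additional information" file) for a moving image asset, as contemplated
# by the proposed Sarubin/Soneka combination. Element names are hypothetical.
import xml.etree.ElementTree as ET


def build_metadata_xml(asset_id: str, title: str, shot_date: str) -> str:
    """Serialize a standalone XML metadata file for a moving image asset."""
    root = ET.Element("AdditionalInformation")
    ET.SubElement(root, "AssetID").text = asset_id      # identification information
    ET.SubElement(root, "Title").text = title
    ET.SubElement(root, "ShotDate").text = shot_date
    return ET.tostring(root, encoding="unicode")


print(build_metadata_xml("A-0001", "Clip 1", "2025-10-27"))
```

Under this sketch, the metadata travels in its own XML document, separate from the moving image file, consistent with the examiner's characterization of the combination.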
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Soneka’s teachings would have allowed Sarubin to provide a method for easily allowing users to modify metadata, as noted by Soneka (Page 1).
Sarubin, Soneka, Hu, and Ishizu do not explicitly teach:
C) wherein the communication unit transmits information that is information included in the moving image file and includes the identification information to the editing device before the communication unit transmits the file to the editing device.
Pircher, however, teaches “wherein the communication unit transmits information that is information included in the moving image file and includes the identification information to the editing device before the communication unit transmits the file to the editing device” as “the digital annotation component 206 may insert a digital annotation into a pdf document using the Adobe.RTM. Developer's API, available from Adobe Systems, Inc, of San Jose, Calif. The digital annotation 220 may be inserted into an HTML-based electronic document by providing an HTML overlay or by providing a metadata file or schema and by inserting, into the HTML file, a pointer to the metadata file or schema” (Paragraph 42).
The examiner further notes that the secondary reference of Pircher teaches the concept of a file with a metadata pointer. The combination would result in the media files of Sarubin, Soneka, Hu, and Ishizu having a metadata pointer such that the actual metadata (i.e., the additional information) is transmitted after any such corresponding file (with the metadata pointer to a metadata file) is transmitted.
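For illustration only (not part of the record), the transmission ordering described above — the media file carrying an embedded pointer being sent first, with the actual metadata retrieved afterward via that pointer — can be sketched as follows; all names and structures here are hypothetical:

```python
# Illustrative sketch only: a media file carries a pointer (identification
# information) to a separate metadata file, so the media file is transmitted
# first and the actual metadata is transmitted afterward, per the examiner's
# characterization of the Pircher combination. All names are hypothetical.
def transmit_sequence(media_file: dict, metadata_store: dict) -> list:
    """Return payloads in the order they would be sent to the editing device."""
    sent = []
    sent.append(("media", media_file))                   # file with embedded pointer
    pointer = media_file["metadata_pointer"]             # identification information
    sent.append(("metadata", metadata_store[pointer]))   # actual metadata sent later
    return sent


media = {"frames": b"...", "metadata_pointer": "meta-42"}
store = {"meta-42": {"format": "xml", "version": "1.0"}}
order = [kind for kind, _ in transmit_sequence(media, store)]
print(order)
```

The sketch simply makes explicit that the pointer lets the recipient associate the later-arriving metadata with the already-received media file.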
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Pircher’s teachings would have allowed Sarubin, Soneka, Hu, and Ishizu to provide a method for synchronizing metadata more easily, as noted by Pircher (Paragraphs 1 and 42).
Regarding claim 9, Sarubin further teaches an image capturing apparatus comprising:
A) wherein the at least one processor is configured to further function as a communication unit that communicates with a editing device (Paragraph 38, Figure 1).
The examiner notes that Sarubin teaches “wherein the at least one processor is configured to further function as a communication unit that communicates with a editing device” as “the asset and its associated asset ID (e.g., pictures, untouched raw files, etc.) are stored in the storage system 12. It may be appreciated that changes to the assets and/or the metadata may be made via the orchestration layer service 16. As will be described in further detail below, a user may enter a command via a user interface 32 of a computing device to modify the asset. By storing the asset and its associated asset ID in the storage system 12, changes made by the user's modification via the orchestration layer service 16 can be tracked and stored in the storage system 12 and/or the master production database 18. Therefore, metadata corresponding to an asset may be updated even after initial ingesting at the interchange layer and registry classification service 14” (Paragraph 38). The examiner further notes that communication between the user device (with interface 32) and the remote devices 12 and/or 18 (i.e., editing devices in the broadest reasonable interpretation, as the claimed editing device is undefined in the claims) teaches the aforementioned limitation.
Sarubin does not explicitly teach:
B) XML file.
Soneka, however, teaches “XML file” as “image data can be managed with items determined from the viewpoint of the user, and the metadata format can be customized according to the user” (Page 2) and “The metadata file 60 in FIG. 3 is described in an xml format” (Page 3).
The examiner further notes that although Sarubin teaches a separate metadata file (i.e., a file that houses additional information), there is no explicit teaching that such a metadata file is in an XML format. Nevertheless, Soneka teaches the concept of a metadata file (i.e., a file that houses additional information) being in an XML format. The combination would result in the metadata file of Sarubin being in an XML format.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Soneka’s teachings would have allowed Sarubin to provide a method for easily allowing users to modify metadata, as noted by Soneka (Page 1).
Sarubin, Soneka, Hu, and Ishizu do not explicitly teach:
C) wherein the communication unit transmits the thumbnail image included in the header of the moving image file and the identification information to the editing device before the communication unit transmits the file to the editing device.
Pircher, however, teaches “wherein the communication unit transmits the thumbnail image included in the header of the moving image file and the identification information to the editing device before the communication unit transmits the file to the editing device” as “the digital annotation component 206 may insert a digital annotation into a pdf document using the Adobe.RTM. Developer's API, available from Adobe Systems, Inc, of San Jose, Calif. The digital annotation 220 may be inserted into an HTML-based electronic document by providing an HTML overlay or by providing a metadata file or schema and by inserting, into the HTML file, a pointer to the metadata file or schema” (Paragraph 42).
The examiner further notes that the secondary reference of Pircher teaches the concept of a file with a metadata pointer. The combination would result in the media files of Sarubin, Soneka, Hu, and Ishizu (which store a thumbnail in the header of a media file) having a metadata pointer such that the actual metadata (i.e., the additional information) is transmitted after any such corresponding file (with the metadata pointer to a metadata file) is transmitted.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Pircher’s teachings would have allowed Sarubin, Soneka, Hu, and Ishizu to provide a method for synchronizing metadata more easily, as noted by Pircher (Paragraphs 1 and 42).
Regarding claim 10, Sarubin does not explicitly teach an image capturing apparatus comprising:
A) XML file.
Soneka, however, teaches “XML file” as “image data can be managed with items determined from the viewpoint of the user, and the metadata format can be customized according to the user” (Page 2) and “The metadata file 60 in FIG. 3 is described in an xml format” (Page 3).
The examiner further notes that although Sarubin teaches a separate metadata file (i.e., a file that houses additional information), there is no explicit teaching that such a metadata file is in an XML format. Nevertheless, Soneka teaches the concept of a metadata file (i.e., a file that houses additional information) being in an XML format. The combination would result in the metadata file of Sarubin being in an XML format.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Soneka’s teachings would have allowed Sarubin to provide a method for easily allowing users to modify metadata, as noted by Soneka (Page 1).
Sarubin, Soneka, Hu, and Ishizu do not explicitly teach:
B) wherein the communication unit transmits a requested file to the editing device in response to reception of a transmission request of the file from the editing device after the communication unit transmits the thumbnail image and the identification information to the editing device.
Pircher, however, teaches “wherein the communication unit transmits a requested file to the editing device in response to reception of a transmission request of the file from the editing device after the communication unit transmits the thumbnail image and the identification information to the editing device” as “the digital annotation component 206 may insert a digital annotation into a pdf document using the Adobe.RTM. Developer's API, available from Adobe Systems, Inc, of San Jose, Calif. The digital annotation 220 may be inserted into an HTML-based electronic document by providing an HTML overlay or by providing a metadata file or schema and by inserting, into the HTML file, a pointer to the metadata file or schema” (Paragraph 42).
The examiner further notes that the secondary reference of Pircher teaches the concept of a file with a metadata pointer. The combination would result in the media files of Sarubin, Soneka, Hu, and Ishizu (which store a thumbnail in the header of a media file) having a metadata pointer such that the actual metadata (i.e., the additional information) is transmitted after any such corresponding file (with the metadata pointer to a metadata file) is transmitted.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Pircher’s teachings would have allowed Sarubin, Soneka, Hu, and Ishizu to provide a method for synchronizing metadata more easily, as noted by Pircher (Paragraphs 1 and 42).
16. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Sarubin et al. (U.S. PGPUB 2020/0210753), in view of Soneka et al. (JP 2009187213A (Machine Translation Provided), dated 20 August 2009), and in view of Hu et al. (Article entitled “Ontology Design for Online News Analysis”, dated 2009), and further in view of Ishizu et al. (U.S. PGPUB 2016/0065887) as applied to claims 1, 3-4, 6-7, and 14-15 above, and further in view of Johansen (U.S. PGPUB 2009/0082888).
17. Regarding claim 13, Sarubin, Soneka, Hu, and Ishizu do not explicitly teach an image capturing apparatus comprising:
A) wherein the identification information is destination information indicating a region where the image capturing apparatus is used.
Johansen, however, teaches “wherein the identification information is destination information indicating a region where the image capturing apparatus is used” as “metadata may be related to an image, a photo, audio, a music track, an audio broadcast, an audio book, a video, a movie, a video broadcast, a stored video, a live video, a digital video recorder file, a music video, audio-visual equipment, an appliance, a content directory, and other metadata types. Metadata may be a description of content being delivered, a rating, a title, a music title, a movie title, a publisher, a right, a plurality of rights, a genre, a language, a relation, a region, a radio call signal, a radio station, a radio band, a channel number, an image name, an artist name, a music track, a playlist, a storage medium, a contributor, a date, a producer, a director, a DVD region code, a channel name, a scheduled start time, a scheduled end time, an icon, and the like” (Paragraph 116).
The examiner further notes that the secondary reference of Johansen teaches the concept of video metadata including a region code (i.e. the claimed destination information indicating a region of use). The combination would result in expanding the type of video metadata in Sarubin to also include region information.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Johansen’s teachings would have allowed Sarubin, Soneka, Hu, and Ishizu to provide a method for expanding the type of auxiliary information in distributed media, as noted by Johansen (Paragraph 116).
Response to Arguments
18. Applicant’s arguments with respect to claims 1, 3-10, and 13-15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument (See newly applied secondary reference of Hu).
Applicant's arguments filed 10/27/2025 have been fully considered but they are not persuasive.
Applicant argues on Page 8 that “Ishizu discloses recording version information such as Exif in the header of an image file. In other words, it discloses recording version information of additional information in a moving image file, but in Ishizu, the additional information (EXIF information) is recorded in the moving image file together with the version information. Therefore, it does not disclose or suggest how the recording format of additional information, which is a file different from the moving image file, can be determined simply by analyzing the moving image file without analyzing the additional information file”. However, the examiner wishes to refer to Ishizu, which states “FIG. 3B is a schematic diagram of a configuration of image data according to the present exemplary embodiment. The image data is, for example, generated by the digital camera 100 and recorded in a data recording area of the storage medium 110” (Paragraph 54) and “The Exif IFD 318 includes a tag regarding an Exif version, a tag regarding a characteristic and a structure of image data, a tag regarding a photographed date and time, a tag regarding photographed conditions in which a shutter speed and a lens focal distance are recorded, and other tags” (Paragraph 58). The examiner further notes that although Soneka teaches the recording of a metadata format (i.e., the claimed recording format of additional information), there is no explicit teaching of storing such information in the corresponding media file to which the metadata corresponds. Nevertheless, Ishizu teaches the concept of storing EXIF version data (i.e., the claimed format of additional information, in the broadest reasonable interpretation) in a corresponding media file. The combination would result in storing the metadata format of Soneka in the actual video file of Sarubin (which is already separate from the metadata file of Sarubin).
Moreover, the combination would result in reading the version of the metadata file without actually having to read the metadata file because the version of the metadata file is stored in the media file itself.
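For illustration only (not part of the record), the result described above — determining the metadata file's format version from the media file alone, without opening the separate metadata file — can be sketched as follows; the field names here are hypothetical:

```python
# Illustrative sketch only: per the examiner's characterization of the
# combination, the separate metadata file's format version is recorded in the
# media file's own header, so it can be read without opening the metadata
# file itself. Field names are hypothetical.
def metadata_version_from_media(media_header: dict) -> str:
    """Read the additional-information format version from the media header alone."""
    return media_header["metadata_version"]


# A hypothetical media-file header carrying the version of its companion
# metadata file alongside ordinary video parameters.
header = {"codec": "h264", "metadata_version": "1.0"}
print(metadata_version_from_media(header))
```

The point of the sketch is only the access pattern: the version lookup touches the media header, never the companion metadata file.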
Conclusion
19. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. PGPUB 2013/0117309 issued to Klein on 12 May 2016. The subject matter disclosed therein is pertinent to that of claims 1, 3-10, and 13-15 (e.g., methods to process metadata).
U.S. PGPUB 2008/0005128 issued to Jayaraman on 03 January 2008. The subject matter disclosed therein is pertinent to that of claims 1, 3-10, and 13-15 (e.g., methods to process metadata).
Contact Information
20. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Mahesh Dwivedi whose telephone number is (571) 272-2731. The examiner can normally be reached on Monday to Friday 8:20 am – 4:40 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Rones can be reached (571) 272-4085. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Mahesh Dwivedi
Primary Examiner
Art Unit 2168
November 13, 2025
/MAHESH H DWIVEDI/Primary Examiner, Art Unit 2168