DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application is being examined under the pre-AIA first to invent provisions.
Response to Arguments
Applicant's arguments filed February 18, 2026 have been fully considered but they are not persuasive.
With regard to claim 1, Applicant submits that the cited prior art does not teach “wherein a first item in the plurality of items and a second item in the plurality of items comprise physical objects.” Remarks, pp. 7-8.
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over a combination of Lanfermann et al. (US 2008/0229363) and Bishop (US 2013/0054613).
Lanfermann teaches wherein a first item in the plurality of items and a second item in the plurality of items comprise physical objects ([0017], “As an example, the user might be interested in the objects ‘cow’, ‘cat’ and ‘dog’ in this order. By selecting these three objects, three virtual video channels are constructed for featuring each of these objects as a theme.” [0039], “In FIG. 2a, the user notices the object animals in the TV program. In this particular case, the user could be a parent who wants to educate his/her child about animals, or, more particularly, about cats. By pressing a designated ‘HyperInfo’ button on the remote control, the objects 203, 204 which are recognized by the object recognizer 101 are highlighted as illustrated by the solid lines surrounding the cat and the dog in FIG. 2b. Assuming the user selects the object cat 204, a virtual channel showing various "cat" categories 205, 206, 207, 208 (see FIG. 2c) is created, varying from the normal house cat to the panther.”).
Bishop teaches:
an item having an unknown identity ([0058], “Similarly, the EIMC 316 can identify potential keywords or keyphrases, even when misspelled, in the electronic document, and the potential keywords or keyphrases can be highlighted or emphasized to indicate that such potential keywords or keyphrases may be a match to a tag, but the level of confidence is lower because the potential keywords or keyphrases were not an exact match to a stored tag.”), and
a second notification indicating that the second item has the unknown identity ([0058], “Also, the highlighting…can be varied (e.g., using different colors, different types of highlighting or emphasis), based at least in part on the level of confidence (e.g., green indicates high level of confidence or exact match, yellow indicates a medium level of confidence, and red indicates a low level of confidence) there is that the identified item(s) of key-content is associated with a tag word or tag phrase in the data store 314 or to differentiate one potential item(s) of key-content from another item(s) of key-content in the electronic document.” … “Similarly, the EIMC 316 can identify potential keywords or keyphrases, even when misspelled, in the electronic document, and the potential keywords or keyphrases can be highlighted or emphasized to indicate that such potential keywords or keyphrases may be a match to a tag, but the level of confidence is lower because the potential keywords or keyphrases were not an exact match to a stored tag.”).
Taking the teachings together, it would have been obvious to one of ordinary skill in the art at the time the invention was made to modify Lanfermann with Bishop to include the second item having an unknown identity, and a second notification indicating that the second item has the unknown identity. The modification would improve the system by providing an intuitive means for indicating to a user whether an object is known or unknown, thereby improving user convenience and facilitating user operation.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1 and 8 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 8, respectively, of U.S. Patent No. 10194206 in view of Lanfermann et al. (US 2008/0229363).
Application claim 1 recites, in part, “wherein a first item in the plurality of items and a second item in the plurality of items comprise physical objects.” Apart from the aforementioned limitations, patent claim 1 substantially encompasses the limitations presented in application claim 1.
Lanfermann provides a teaching wherein a first item in the plurality of items and a second item in the plurality of items comprise physical objects ([0017], “As an example, the user might be interested in the objects ‘cow’, ‘cat’ and ‘dog’ in this order. By selecting these three objects, three virtual video channels are constructed for featuring each of these objects as a theme.” [0039], “In FIG. 2a, the user notices the object animals in the TV program. In this particular case, the user could be a parent who wants to educate his/her child about animals, or, more particularly, about cats. By pressing a designated ‘HyperInfo’ button on the remote control, the objects 203, 204 which are recognized by the object recognizer 101 are highlighted as illustrated by the solid lines surrounding the cat and the dog in FIG. 2b. Assuming the user selects the object cat 204, a virtual channel showing various "cat" categories 205, 206, 207, 208 (see FIG. 2c) is created, varying from the normal house cat to the panther.”).
Considering Lanfermann, a person of ordinary skill in the art would conclude that the invention defined in application claim 1 would have been an obvious variation of the invention defined in claim 1 of the patent. Although the conflicting claims are not identical, the application claim is not patentably distinct from the patent claim because the examined application claim would have been obvious over the patent claim when considered with the cited prior art. The above analysis similarly applies to application claim 8, which corresponds to patent claim 8.
Claim 3 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 2 of U.S. Patent No. 10194206. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 5 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 10194206. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 6 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 10194206. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 7 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 4 of U.S. Patent No. 10194206. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 9 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 8 of U.S. Patent No. 10194206. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 10 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 9 of U.S. Patent No. 10194206. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 12 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 8 of U.S. Patent No. 10194206. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 13 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 8 of U.S. Patent No. 10194206. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 14 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 11 of U.S. Patent No. 10194206. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 16 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 4 of U.S. Patent No. 10194206. Although the claims at issue are not identical, they are not patentably distinct from each other because claim 16 is generic to a species or sub-genus claimed in patent claim 4, i.e., the entire scope of patent claim 4 falls within the scope of claim 16 of the application.
Claim 17 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 18 of U.S. Patent No. 10194206. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 19 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 18 of U.S. Patent No. 10194206. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claims 1 and 8 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 12, respectively, of U.S. Patent No. 11611806 in view of Lanfermann et al. (US 2008/0229363).
Application claim 1 recites, in part, “wherein a first item in the plurality of items and a second item in the plurality of items comprise physical objects.” Apart from the aforementioned limitations, patent claim 1 substantially encompasses the limitations presented in application claim 1.
Lanfermann provides a teaching wherein a first item in the plurality of items and a second item in the plurality of items comprise physical objects ([0017], “As an example, the user might be interested in the objects ‘cow’, ‘cat’ and ‘dog’ in this order. By selecting these three objects, three virtual video channels are constructed for featuring each of these objects as a theme.” [0039], “In FIG. 2a, the user notices the object animals in the TV program. In this particular case, the user could be a parent who wants to educate his/her child about animals, or, more particularly, about cats. By pressing a designated ‘HyperInfo’ button on the remote control, the objects 203, 204 which are recognized by the object recognizer 101 are highlighted as illustrated by the solid lines surrounding the cat and the dog in FIG. 2b. Assuming the user selects the object cat 204, a virtual channel showing various "cat" categories 205, 206, 207, 208 (see FIG. 2c) is created, varying from the normal house cat to the panther.”).
Considering Lanfermann, a person of ordinary skill in the art would conclude that the invention defined in application claim 1 would have been an obvious variation of the invention defined in claim 1 of the patent. Although the conflicting claims are not identical, the application claim is not patentably distinct from the patent claim because the examined application claim would have been obvious over the patent claim when considered with the cited prior art. The above analysis similarly applies to application claim 8, which corresponds to patent claim 12.
Claim 2 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11611806. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 3 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 4 of U.S. Patent No. 11611806. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 5 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 2 of U.S. Patent No. 11611806. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 6 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 2 of U.S. Patent No. 11611806. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 7 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 6 of U.S. Patent No. 11611806. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 9 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 12 of U.S. Patent No. 11611806. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 10 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 15 of U.S. Patent No. 11611806. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 12 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 13 of U.S. Patent No. 11611806. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 13 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 13 of U.S. Patent No. 11611806. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 14 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 17 of U.S. Patent No. 11611806. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 16 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11611806. Although the claims at issue are not identical, they are not patentably distinct from each other because claim 16 is generic to a species or sub-genus claimed in patent claim 1, i.e., the entire scope of patent claim 1 falls within the scope of claim 16 of the application.
Claim 17 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11611806. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 19 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11611806. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claims 1 and 8 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 11, respectively, of U.S. Patent No. 12143676 in view of Lanfermann et al. (US 2008/0229363).
Application claim 1 recites, in part, “wherein a first item in the plurality of items and a second item in the plurality of items comprise physical objects.” Apart from the aforementioned limitations, patent claim 1 substantially encompasses the limitations presented in application claim 1.
Lanfermann provides a teaching wherein a first item in the plurality of items and a second item in the plurality of items comprise physical objects ([0017], “As an example, the user might be interested in the objects ‘cow’, ‘cat’ and ‘dog’ in this order. By selecting these three objects, three virtual video channels are constructed for featuring each of these objects as a theme.” [0039], “In FIG. 2a, the user notices the object animals in the TV program. In this particular case, the user could be a parent who wants to educate his/her child about animals, or, more particularly, about cats. By pressing a designated ‘HyperInfo’ button on the remote control, the objects 203, 204 which are recognized by the object recognizer 101 are highlighted as illustrated by the solid lines surrounding the cat and the dog in FIG. 2b. Assuming the user selects the object cat 204, a virtual channel showing various "cat" categories 205, 206, 207, 208 (see FIG. 2c) is created, varying from the normal house cat to the panther.”).
Considering Lanfermann, a person of ordinary skill in the art would conclude that the invention defined in application claim 1 would have been an obvious variation of the invention defined in claim 1 of the patent. Although the conflicting claims are not identical, the application claim is not patentably distinct from the patent claim because the examined application claim would have been obvious over the patent claim when considered with the cited prior art. The above analysis similarly applies to application claim 8, which corresponds to patent claim 11.
Claim 2 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 12143676. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 3 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 4 of U.S. Patent No. 12143676. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 5 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 2 of U.S. Patent No. 12143676. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 6 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 2 of U.S. Patent No. 12143676. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 7 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 6 of U.S. Patent No. 12143676. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 9 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 11 of U.S. Patent No. 12143676. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 10 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 14 of U.S. Patent No. 12143676. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 12 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 12 of U.S. Patent No. 12143676. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 13 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 12 of U.S. Patent No. 12143676. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 14 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 16 of U.S. Patent No. 12143676. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 16 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 6 of U.S. Patent No. 12143676. Although the claims at issue are not identical, they are not patentably distinct from each other because claim 16 is generic to a species or sub-genus claimed in patent claim 6, i.e., the entire scope of patent claim 6 falls within the scope of claim 16 of the application.
Claim 17 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 6 of U.S. Patent No. 12143676. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim 19 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 6 of U.S. Patent No. 12143676. Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter is not patentably distinct from the subject matter claimed in the commonly owned patent.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claims 1 and 8 each recite, in part, “wherein a first item in the plurality of items and a second item in the plurality of items comprise physical objects.”
The specification as originally filed does not appear to disclose or suggest that “a first item in the plurality of items and a second item in the plurality of items comprise physical objects.” Accordingly, claims 1 and 8 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. Claims 2-7 and 9-15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, due to their respective dependencies on a rejected claim.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 3-4, 8, 10-11, and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Lanfermann et al. (US 2008/0229363) and Bishop (US 2013/0054613).
Regarding claim 1, Lanfermann teaches a method comprising:
identifying an image ([0037], “When the user 105 is interested in a particular scene or objects in a scene, the object recognizer 101 receives a command from the user 105 to ‘freeze’ the TV scene. This can e.g. be done by pressing a ‘HyperInfo’ button on the remote control. The objects in the scene, which are recognized by the object recognizer 101, are then highlighted for the user 105. The user can now choose and select one or more of these highlighted objects, where after a virtual channel or additional information relating to the selected objects are made available to the user e.g. by displaying the video content, music or music play-list, or the additional information via the virtual channel to the user on the TV screen.”);
identifying a plurality of items displayed in the image ([0037], “The object recognizer 101 analyzes the objects which are being displayed 104 for the user 105, in this example on a TV screen 106. … The objects in the scene, which are recognized by the object recognizer 101, are then highlighted for the user 105.”),
wherein a first item in the plurality of items and a second item in the plurality of items comprise physical objects ([0017], “As an example, the user might be interested in the objects ‘cow’, ‘cat’ and ‘dog’ in this order. By selecting these three objects, three virtual video channels are constructed for featuring each of these objects as a theme.” [0039], “In FIG. 2a, the user notices the object animals in the TV program. In this particular case, the user could be a parent who wants to educate his/her child about animals, or, more particularly, about cats. By pressing a designated ‘HyperInfo’ button on the remote control, the objects 203, 204 which are recognized by the object recognizer 101 are highlighted as illustrated by the solid lines surrounding the cat and the dog in FIG. 2b. Assuming the user selects the object cat 204, a virtual channel showing various "cat" categories 205, 206, 207, 208 (see FIG. 2c) is created, varying from the normal house cat to the panther.”);
initiating a search associated with the plurality of items ([0037], “When the user 105 is interested in a particular scene or objects in a scene, the object recognizer 101 receives a command from the user 105 to ‘freeze’ the TV scene. This can e.g. be done by pressing a ‘HyperInfo’ button on the remote control. The objects in the scene, which are recognized by the object recognizer 101, are then highlighted for the user 105.” [0039], “By pressing a designated ‘HyperInfo’ button on the remote control, the objects 203, 204 which are recognized by the object recognizer 101 are highlighted as illustrated by the solid lines surrounding the cat and the dog in FIG. 2b.”);
determining, based on the search, that a first item has a known identity and a second item having an unknown identity ([0037], “The object recognizer 101 analyzes the objects which are being displayed 104 for the user 105, in this example on a TV screen 106. … The objects in the scene, which are recognized by the object recognizer 101, are then highlighted for the user 105.” [0042]); and
causing a display of the image, the display comprising a first notification indicating the first item has the known identity ([0037], “The object recognizer 101 analyzes the objects which are being displayed 104 for the user 105, in this example on a TV screen 106. … The objects in the scene, which are recognized by the object recognizer 101, are then highlighted for the user 105.” [0042], Fig. 2B).
Bishop teaches:
an item having an unknown identity ([0058], “Similarly, the EIMC 316 can identify potential keywords or keyphrases, even when misspelled, in the electronic document, and the potential keywords or keyphrases can be highlighted or emphasized to indicate that such potential keywords or keyphrases may be a match to a tag, but the level of confidence is lower because the potential keywords or keyphrases were not an exact match to a stored tag.”), and
a second notification indicating that the second item has the unknown identity ([0058], “Also, the highlighting…can be varied (e.g., using different colors, different types of highlighting or emphasis), based at least in part on the level of confidence (e.g., green indicates high level of confidence or exact match, yellow indicates a medium level of confidence, and red indicates a low level of confidence) there is that the identified item(s) of key-content is associated with a tag word or tag phrase in the data store 314 or to differentiate one potential item(s) of key-content from another item(s) of key-content in the electronic document.” … “Similarly, the EIMC 316 can identify potential keywords or keyphrases, even when misspelled, in the electronic document, and the potential keywords or keyphrases can be highlighted or emphasized to indicate that such potential keywords or keyphrases may be a match to a tag, but the level of confidence is lower because the potential keywords or keyphrases were not an exact match to a stored tag.”).
In view of Bishop’s teaching, it would have been obvious to one of ordinary skill in the art at the time the invention was made to modify Lanfermann to include the second item having an unknown identity, and a second notification indicating that the second item has the unknown identity. The modification would serve to improve the system by providing an intuitive means for indicating to a user whether an object is known or unknown. The modification would thereby improve user convenience, and would additionally facilitate user operation.
Regarding claims 3 and 10, the combination teaches the limitations specified above; however, the combination as presently combined does not expressly teach wherein the first notification comprises a first color displayed in association with the first item in the image, and wherein the second notification comprises a second color displayed in association with the second item in the image.
Bishop provides a teaching for analyzing electronic documents (abstract), including video content (abstract, [0002], [0007], [0080], [0100], [0136]-[0148] Fig. 13), and highlighting key video content ([0034]). Bishop additionally teaches that highlighting may comprise using different colors based on a level of confidence of a match ([0058], “Also, the highlighting (e.g., using color(s) with regard to the text or the portion of the UI screen on which the text is displayed) or emphasizing (e.g., bolding, italicizing, or changing the size) of the item(s) of key-content can be varied (e.g., using different colors, different types of highlighting or emphasis), based at least in part on the level of confidence (e.g., green indicates high level of confidence or exact match, yellow indicates a medium level of confidence, and red indicates a low level of confidence) there is that the identified item(s) of key-content is associated with a tag word or tag phrase in the data store 314…”).
In view of Bishop, it would have been obvious to one of ordinary skill in the art at the time the invention was made to further modify the combination wherein the first notification comprises a first color displayed in association with the first item in the image, and wherein the second notification comprises a second color displayed in association with the second item in the image. The modification would serve to improve the system by providing an intuitive means for indicating to a user whether an object is known or unknown. The modification would thereby improve user convenience, and would additionally facilitate user operation.
Regarding claims 4 and 11, the combination further teaches further comprising: indicating a plurality of search statuses for the plurality of items based on the search (Lanfermann: [0037], “When the user 105 is interested in a particular scene or objects in a scene, the object recognizer 101 receives a command from the user 105 to ‘freeze’ the TV scene. This can e.g. be done by pressing a ‘HyperInfo’ button on the remote control. The objects in the scene, which are recognized by the object recognizer 101, are then highlighted for the user 105.” [0039], “By pressing a designated ‘HyperInfo’ button on the remote control, the objects 203, 204 which are recognized by the object recognizer 101 are highlighted as illustrated by the solid lines surrounding the cat and the dog in FIG. 2b.”).
Regarding claim 8, Lanfermann teaches a computing apparatus comprising:
a non-transitory computer-readable storage medium; at least one processor operatively coupled to the non-transitory computer-readable storage medium; and program instructions stored on the non-transitory computer-readable storage medium that, when executed by the at least one processor, direct the computing apparatus to perform a method ([0026], [0043], i.e., non-transitory computer readable media is inherent to a computer). The rejection of claim 1 under 35 U.S.C. § 103 is similarly applied to the remaining limitations of claim 8.
Regarding claim 15, the combination further teaches wherein identifying the image comprises identifying a selection of the image in video content (Lanfermann: [0037], “When the user 105 is interested in a particular scene or objects in a scene, the object recognizer 101 receives a command from the user 105 to ‘freeze’ the TV scene. This can e.g. be done by pressing a ‘HyperInfo’ button on the remote control. The objects in the scene, which are recognized by the object recognizer 101, are then highlighted for the user 105. The user can now choose and select one or more of these highlighted objects, where after a virtual channel or additional information relating to the selected objects are made available to the user e.g. by displaying the video content, music or music play-list, or the additional information via the virtual channel to the user on the TV screen.”).
Claim(s) 2 and 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Lanfermann, Bishop, and El-Saban et al. (US 2011/0295851).
Regarding claims 2 and 9, the combination teaches the limitations specified above; however, the combination does not expressly teach: determining, based on the search, that the plurality of items includes a third item having a plurality of potential identities; wherein the display further comprises a third notification indicating the third item possesses the plurality of potential identities.
El-Saban teaches determining an item having a plurality of potential tags, wherein a notification indicates the item possesses the plurality of potential tags ([0057], [0058], “In 522, the display shows the captured media; here a still image of a tree. Tabs 524 allow a user to toggle between a view of suggested tags and suggested links. Here, the tag view is active. The user interface 520 displays suggested metadata received from the server. Here, the words, ‘tree’, ‘NW’ (for Northwest, a geographical area known for being heavily forested), ‘green’ and ‘red’ are displayed with labels 526. Check boxes 528 allow a user to select or de-select suggested metadata. Alternative user interface controls including, but not limited to, combo boxes and push buttons may alternatively be used. In user interface 520, the semantics of checking a checkbox 528 is to indicate that the user does not consider the particular suggested metadata to be relevant. Here, ‘red’ has been checked to mean ‘de-selection’ since the user does not consider the word ‘red’ to be relevant metadata for an image of a tree as shown in 522.” Fig. 5).
In view of El-Saban’s teaching, it would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the combination to include determining, based on the search, that the plurality of items includes a third item having a plurality of potential identities; wherein the display further comprises a third notification indicating the third item possesses the plurality of potential identities. The modification would allow users to easily associate identities with objects, and would additionally serve to facilitate the search and retrieval of media objects (El-Saban: Abstract).
Claim(s) 5-6 and 12-13 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Lanfermann, Bishop, and Watanabe (US 2009/0262213).
Regarding claims 5 and 12, the combination teaches the limitations specified above; however, the combination does not expressly teach wherein indicating the plurality of search statuses for the plurality of items comprises: for the first item: providing a first visual state indicating the search is ongoing; and providing a second visual state indicating the search is completed.
Watanabe provides a teaching for a user interface wherein an initial visual indication is modified from an initial visual state indicating that a detection process is ongoing to a modified visual state indicating that the detection process has been completed ([0085], [0089], “in FIG. 6, the framed portions in solid lines 300a, 300b show successfully followed faces in the frames while the framed portion in dashed line 301 represents a framed region including an unsuccessfully followed face which is under the detection in Step S24 (detection is performed again).” [0088], “The display representing that the unsuccessfully followed face is being detected is kept on while the processing in Steps S21 to S28, that is, the face detection in the stored area including the unsuccessfully followed face is being performed.” [0092], [0085], “Unless the counter value R is zero (Step S24), the detection is performed in the region including the unsuccessfully followed face stored in Step S22 (Step S25).” Figs. 6-7).
The examiner submits that Watanabe would have at least suggested to one having ordinary skill that, upon successful detection of the unsuccessfully followed face ([0085]-[0087], Figs. 6-7), the dashed line representing the unsuccessfully followed face would change to a solid line representing a successfully followed face.
In view of Watanabe, it would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the combination such that indicating the plurality of search statuses for the plurality of items comprises: for the first item: providing a first visual state indicating the search is ongoing; and providing a second visual state indicating the search is completed. The modification would serve to allow users to easily and properly understand the search status of the plurality of items (see Watanabe, [0092]). The modification would additionally serve to facilitate user operation.
Regarding claims 6 and 13, the combination teaches the limitations specified above; however, the combination does not expressly teach, wherein indicating the plurality of search statuses for the plurality of items comprises: indicating a first visual state for the first item; and indicating a second visual state for the second item, the second visual state different from the first visual state.
Watanabe provides a teaching for indicating a first visual state for a first item; and indicating a second visual state for a second item, the second visual state different from the first visual state ([0085], [0089], “in FIG. 6, the framed portions in solid lines 300a, 300b show successfully followed faces in the frames while the framed portion in dashed line 301 represents a framed region including an unsuccessfully followed face which is under the detection in Step S24 (detection is performed again).” [0088], “The display representing that the unsuccessfully followed face is being detected is kept on while the processing in Steps S21 to S28, that is, the face detection in the stored area including the unsuccessfully followed face is being performed.” [0092], [0085], “Unless the counter value R is zero (Step S24), the detection is performed in the region including the unsuccessfully followed face stored in Step S22 (Step S25).” Fig. 6, framed portions 300a and 300b are solid line frames, and framed portion 301 is a dashed line).
In view of Watanabe’s teaching, it would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the combination such that indicating the plurality of search statuses for the plurality of items comprises: indicating a first visual state for the first item; and indicating a second visual state for the second item, the second visual state different from the first visual state. The modification would serve to improve the system by providing an intuitive means for indicating to a user whether an object is known. The modification would thereby improve user convenience, and would additionally facilitate user operation.
Claim(s) 7 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Lanfermann, Bishop, and Avison-Fell (US 2012/0167144).
Regarding claims 7 and 14, the combination further teaches further comprising: receiving a selection of the first item; and in response to the selection, causing display of information associated with the first item (Lanfermann: [0039], “Assuming the user selects the object cat 204, a virtual channel showing various ‘cat’ categories 205, 206, 207, 208 (see FIG. 2c) is created, varying from the normal house cat to the panther. In this example, the user has the opportunity to select still another virtual channel from these categories and thereby obtain more detailed information. Assuming the user selects the normal house cat, various house cats can be displayed. This is illustrated in FIG. 2d, showing the Siamese cat 209, Persian cat 210, Tabby Persian cat 211. The user could even get further information, video/TV clips, etc. by selecting e.g. the Siamese cat 209.” Figs. 2a-2d).
The combination teaches the limitations specified above; however, the combination does not expressly teach in response to the selection, causing display of at least one hyperlink associated with the first item and received based on the search.
Avison-Fell teaches, in response to a selection, causing display of at least one webpage associated with an item and received based on a search ([0028], “In general, the features that the receiver is configured to detect may be associated with one or more objects that may be of potential interest to a viewer of the video frame.” [0035], “Once the receiver determines the keyword associated with the detected object, information about the detected object may be obtained from several sources. In one embodiment, the receiver may provide the keyword or phrase to an internet search engine to retrieve an informational webpage about the detected image.”).
In view of Avison-Fell’s teaching, it would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the combination to include in response to the selection, causing display of at least one hyperlink associated with the first item and received based on the search. The modification would enhance the combined system by providing a convenient means by which users may access a website related to a selected item.
Claim(s) 16, 18, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Lanfermann and Kikinis.
Regarding claim 16, Lanfermann teaches a non-transitory computer-readable medium having program instructions stored thereon that, when executed by at least one processor ([0026], [0043], i.e., non-transitory computer readable media is inherent to a computer), perform a method, the method comprising:
identifying an image ([0037], “When the user 105 is interested in a particular scene or objects in a scene, the object recognizer 101 receives a command from the user 105 to ‘freeze’ the TV scene. This can e.g. be done by pressing a ‘HyperInfo’ button on the remote control. The objects in the scene, which are recognized by the object recognizer 101, are then highlighted for the user 105. The user can now choose and select one or more of these highlighted objects, where after a virtual channel or additional information relating to the selected objects are made available to the user e.g. by displaying the video content, music or music play-list, or the additional information via the virtual channel to the user on the TV screen.”);
identifying a plurality of items displayed in the image ([0037], “The object recognizer 101 analyzes the objects which are being displayed 104 for the user 105, in this example on a TV screen 106. … The objects in the scene, which are recognized by the object recognizer 101, are then highlighted for the user 105.”);
initiating a search associated with the plurality of items ([0037], “When the user 105 is interested in a particular scene or objects in a scene, the object recognizer 101 receives a command from the user 105 to ‘freeze’ the TV scene. This can e.g. be done by pressing a ‘HyperInfo’ button on the remote control. The objects in the scene, which are recognized by the object recognizer 101, are then highlighted for the user 105.” [0039], “By pressing a designated ‘HyperInfo’ button on the remote control, the objects 203, 204 which are recognized by the object recognizer 101 are highlighted as illustrated by the solid lines surrounding the cat and the dog in FIG. 2b.”);
identifying information associated with a first item in the plurality of items based on the search ([0039], “Assuming the user selects the object cat 204, a virtual channel showing various ‘cat’ categories 205, 206, 207, 208 (see FIG. 2c) is created, varying from the normal house cat to the panther. In this example, the user has the opportunity to select still another virtual channel from these categories and thereby obtain more detailed information. Assuming the user selects the normal house cat, various house cats can be displayed. This is illustrated in FIG. 2d, showing the Siamese cat 209, Persian cat 210, Tabby Persian cat 211. The user could even get further information, video/TV clips, etc. by selecting e.g. the Siamese cat 209.” Figs. 2a-2d); and
causing a display of the image, the first item labeled in the display of the image with at least the hyperlink ([0039], “Assuming the user selects the object cat 204, a virtual channel showing various ‘cat’ categories 205, 206, 207, 208 (see FIG. 2c) is created, varying from the normal house cat to the panther. In this example, the user has the opportunity to select still another virtual channel from these categories and thereby obtain more detailed information. Assuming the user selects the normal house cat, various house cats can be displayed. This is illustrated in FIG. 2d, showing the Siamese cat 209, Persian cat 210, Tabby Persian cat 211. The user could even get further information, video/TV clips, etc. by selecting e.g. the Siamese cat 209.” Figs. 2a-2d).
Lanfermann teaches the limitations specified above; however, Lanfermann does not expressly teach identifying a hyperlink associated with a first item and causing a display of the image, the first item labeled in the display of the image with at least the hyperlink.
Kikinis teaches identifying a hyperlink associated with a first item and causing a display of the image, the first item labeled in the display of the image with at least the hyperlink (Abstract; Col. 5, lines 17-27, “In embodiments of the present invention, individual images in TV presentations, such as persons, objects, and the like, are linked with Universal Resource Locators (URLs) in a manner that a viewer may select such images, and by so doing, invoke a linked URL, which leads to a WEB location providing information related to the image.” Col. 7, lines 56-67, “If the viewer is interested in additional information, he/she may manipulate the cursor to touch the region of emblem 57 and then actuate a selection signal, such as pressing one of the buttons 69 on the remote. On receipt of the selection signal with the cursor touching the BMW emblem, the system executes browser routines, accessing the WWW, and dials up the WEB server (see server 54 and modem 35 or 39, FIG. 1)”. Figs. 2A, C).
In view of Kikinis’ teaching, it would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the combination to include identifying a hyperlink associated with a first item and causing a display of the image, the first item labeled in the display of the image with at least the hyperlink. The modification would enhance the combined system by providing a convenient means by which users may access a website related to a selected item.
Regarding claim 18, the combination further teaches further comprising indicating a plurality of search statuses for the plurality of items based on the search (Lanfermann: [0037], “When the user 105 is interested in a particular scene or objects in a scene, the object recognizer 101 receives a command from the user 105 to ‘freeze’ the TV scene. This can e.g. be done by pressing a ‘HyperInfo’ button on the remote control. The objects in the scene, which are recognized by the object recognizer 101, are then highlighted for the user 105.” [0039], “By pressing a designated ‘HyperInfo’ button on the remote control, the objects 203, 204 which are recognized by the object recognizer 101 are highlighted as illustrated by the solid lines surrounding the cat and the dog in FIG. 2b.”).
Regarding claim 20, the combination further teaches, wherein identifying the image comprises identifying a selection of the image in video content (Lanfermann: [0037], “When the user 105 is interested in a particular scene or objects in a scene, the object recognizer 101 receives a command from the user 105 to ‘freeze’ the TV scene. This can e.g. be done by pressing a ‘HyperInfo’ button on the remote control. The objects in the scene, which are recognized by the object recognizer 101, are then highlighted for the user 105. The user can now choose and select one or more of these highlighted objects, where after a virtual channel or additional information relating to the selected objects are made available to the user e.g. by displaying the video content, music or music play-list, or the additional information via the virtual channel to the user on the TV screen.”).
Claim(s) 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Lanfermann, Kikinis, and Bishop.
Regarding claim 17, the combination teaches the limitations specified above; however, the combination does not expressly teach wherein the method further comprises:
determining, based on the search, that the plurality of items includes a second item having an unknown identity;
wherein the display further comprises a notification indicating the second item has the unknown identity.
Bishop teaches:
an item having an unknown identity ([0058], “Similarly, the EIMC 316 can identify potential keywords or keyphrases, even when misspelled, in the electronic document, and the potential keywords or keyphrases can be highlighted or emphasized to indicate that such potential keywords or keyphrases may be a match to a tag, but the level of confidence is lower because the potential keywords or keyphrases were not an exact match to a stored tag.”), and
a second notification indicating that the second item has the unknown identity ([0058], “Also, the highlighting…can be varied (e.g., using different colors, different types of highlighting or emphasis), based at least in part on the level of confidence (e.g., green indicates high level of confidence or exact match, yellow indicates a medium level of confidence, and red indicates a low level of confidence) there is that the identified item(s) of key-content is associated with a tag word or tag phrase in the data store 314 or to differentiate one potential item(s) of key-content from another item(s) of key-content in the electronic document.” … “Similarly, the EIMC 316 can identify potential keywords or keyphrases, even when misspelled, in the electronic document, and the potential keywords or keyphrases can be highlighted or emphasized to indicate that such potential keywords or keyphrases may be a match to a tag, but the level of confidence is lower because the potential keywords or keyphrases were not an exact match to a stored tag.”).
In view of Bishop’s teaching, it would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the combination to include determining, based on the search, that the plurality of items includes a second item having an unknown identity; wherein the display further comprises a notification indicating the second item has the unknown identity. The modification would serve to improve the system by providing an intuitive means for indicating to a user whether an object is known or unknown. The modification would thereby improve user convenience, and would additionally facilitate user operation.
Claim(s) 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Lanfermann, Kikinis, and El-Saban.
Regarding claim 19, the combination teaches the limitations specified above; however, the combination does not expressly teach wherein the method further comprises: determining, based on the search, that the plurality of items includes a second item having a plurality of potential identities; wherein the display further comprises a notification indicating that the second item has the plurality of potential identities.
El-Saban teaches determining an item having a plurality of potential tags, wherein a notification indicates the item possesses the plurality of potential tags ([0057], [0058], “In 522, the display shows the captured media; here a still image of a tree. Tabs 524 allow a user to toggle between a view of suggested tags and suggested links. Here, the tag view is active. The user interface 520 displays suggested metadata received from the server. Here, the words, ‘tree’, ‘NW’ (for Northwest, a geographical area known for being heavily forested), ‘green’ and ‘red’ are displayed with labels 526. Check boxes 528 allow a user to select or de-select suggested metadata. Alternative user interface controls including, but not limited to, combo boxes and push buttons may alternatively be used. In user interface 520, the semantics of checking a checkbox 528 is to indicate that the user does not consider the particular suggested metadata to be relevant. Here, ‘red’ has been checked to mean ‘de-selection’ since the user does not consider the word ‘red’ to be relevant metadata for an image of a tree as shown in 522.” Fig. 5).
In view of El-Saban’s teaching, it would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the combination to include determining, based on the search, that the plurality of items includes a second item having a plurality of potential identities; wherein the display further comprises a notification indicating that the second item has the plurality of potential identities. The modification would allow users to easily associate identities with objects, and would additionally serve to facilitate the search and retrieval of media objects (El-Saban: Abstract).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL R TELAN whose telephone number is (571)270-5940. The examiner can normally be reached 9:30AM-6:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nasser Goodarzi can be reached at (571) 272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL R TELAN/ Primary Examiner, Art Unit 2426