Dear Delegate
Thank you for registering for the online seminar An Introduction to Resource Discovery. Below we have a number of documents to support and enhance your participation in the session. Please click on the links below to access each of the documents.
General Information / Useful Links
- Go to Webinar Guide
- Programme
- Delegate List
- UKSG – Strategic Vision
- UKSG Code of Conduct
- UKSG eNews – Members please sign up
- UKSG Insights – Sign up for content alerts
- Lis-e-resources – Sign up today
- Useful Acronyms & Abbreviations
- Free recorded webinar
- Upcoming events
- *Updated* UKSG Podcast series – following the scholarly comms career journeys from the knowledge community
- Get involved
- Awards & bursaries
- Open for bookings for our UKSG Forum on Tuesday 5th December in Birmingham – This year's theme is “Everything everywhere all at once”: keeping up with our users' information needs in the age of open scholarship and TikTok
- Register now – November conference (online) 15th & 16th November 2023 – This year's theme is “Enriching Scholarship: how libraries and publishers educate, enhance and inform scholarly works”
- Save the date for our 47th annual conference and exhibition taking place at the SEC, Glasgow, 8th to 10th April 2024
Seminar Contents and Materials
| Recordings: | Day 1 recording | Day 2 recording |
| Speakers' Presentations: | Day 1 | Day 2 |
Question Responses
Question: Have you faced problems activating trials in Alma? I find that they either work perfectly, or require lots of tweaking. How do you approach trials in general at Strathclyde?

Response from Fiona Tinto:
I mentioned during my presentation that there is a fair bit of overlap between the work that gets handled by our Cataloguing, Serials and eResources teams. Keeping track of journal subscriptions is an area of work that is handled by Serials rather than eResources, and we definitely don't have any magic suggestions in terms of a best-practice way to manage this, I'm afraid. Where we have comprehensive licensing information from publishers, we'd hope that this would give clear information about the titles included and about perpetual access vs leased access rights. We try to get clarification if this information isn't clear in the subscription information, but details like that do sometimes slip through the net – particularly with smaller publishers who don't have a formal licence document.

We'd like to think that what we have activated in Alma at any given time is a pretty accurate reflection of our journal entitlements; but to be honest our Serials team have a lot of spreadsheets tracking our subscriptions each year, and ultimately that's what we rely on for records and reference if queries arise. I don't think we have any suggestions to replace the spreadsheets. It's definitely something we find challenging – especially in terms of keeping track of historic 'Core titles' within 'big deal' subscriptions.
Where the Alma Community Zone offers different collections for backfile and current content we do use these. It results in extra links in our catalogue (i.e. for a single journal you will then have two separate holdings links, leading out to exactly the same place – one holdings link displaying backfile coverage dates, and the other displaying current dates), but when a provider sells their content like this and makes the corresponding catalogue records available for the separate products in this way, we definitely find it's easier to use those record sets.

This is a slight tangent to your question, but something we do find difficult is where there are gaps in the full-text material on the provider's platform. A typical example of where this might occur is when there has been a merger, or a publisher takes over content from another publisher. If the publisher discovers at a later stage that there are gaps in the full-text material they have acquired, then there is often very little they can do retrospectively to obtain the missing content. As well as the obvious problem that this creates in terms of lack of access to the full-text material, this also creates big discovery problems. If bitty content is missing here and there (random issues or articles) then you can't convey those gaps in coverage in your catalogue holdings statements. Both your catalogue holdings and your link resolver therefore indicate full-text availability for content that you don't actually have access to. It's a problem.

Again, a bit of a tangent, but here's a concerning statement from APMA – see the section on 'Subscription Access' in their E-Journal Guidelines for Institutional Access webpage, which explains that APMA can't guarantee that previous issues will remain available on the site: “APMA intends to retain full-text versions of all articles for all past issues posted online and to continue to make this database available to all currently active subscribers. However, because APMA cannot be certain of future technology, storage, or maintenance costs, this access cannot be guaranteed. APMA reserves the right to remove all or portions of the archive of past issues.” We've noticed that their publication JAPMA seems to only contain full-text material from 1996 onwards. We've emailed APMA to query the missing pre-1996 content and are waiting to hear back from them, but I'm worried that this content may no longer be available on the site. The joys of electronic as opposed to print material…
Question: What are the most common practical challenges you currently experience in managing activation/discoverability during the transition from traditional subscription models to open access, particularly with e-journals?

Response from Fiona Tinto:
There can definitely be delays in knowledgebase changes – so there can be a lag between a journal flipping to Open Access and the records in knowledgebases and discovery indexes updating to show that the title is OA. We prefer to keep Open Access and paywalled content activated in separate Alma electronic collections. We do this for two reasons: 1) we don't want to put OA content into a collection that has EZproxy or WAYFless linking in place (because we don't want to be putting OA content behind a login barrier in that way), and 2) we want to be able to report on paid-for and OA content separately in Alma Analytics. However, where we're using Community Zone collections, if the providers don't make a separate Community Zone collection available for their OA content then we're restricted in our ability to do that; it's not really something we can attempt to manage manually if the providers don't offer separate collections. I don't think that has a massive impact on our main end-users (i.e. Strathclyde staff and students) because these users do have credentials – so even if the content remains marked as paywalled in our catalogue, they are still going to be able to log in and access it. Ideally, though, we don't want OA content to be behind login barriers from our catalogue records. There are other user groups (graduates, pre-enrolled users, the public) who we would like to be able to use our catalogue as a discovery resource and then be able to access OA content.
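As a rough illustration of the login-barrier point above: proxied access typically works by prefixing the target URL with the institution's EZproxy login URL, whereas OA links should go straight to the provider. The sketch below is only illustrative – the proxy hostname and target URLs are placeholders, and it is not how Alma/Primo constructs its links internally.

```python
# Illustrative sketch only. The proxy hostname and URLs below are placeholders,
# not a real configuration, and this is not how Alma/Primo builds links.
EZPROXY_LOGIN_PREFIX = "https://ezproxy.example.ac.uk/login?url="  # hypothetical proxy host


def access_url(target_url: str, is_open_access: bool) -> str:
    """Return a direct link for OA content, or an EZproxy-prefixed link
    (which routes the user via institutional login) for paywalled content."""
    if is_open_access:
        # No login barrier for OA material - link straight to the provider.
        return target_url
    # Classic EZproxy "starting point URL" pattern; URLs containing query
    # strings may need encoding in a real deployment.
    return EZPROXY_LOGIN_PREFIX + target_url


# Paywalled title: the user is sent via the proxy login page.
print(access_url("https://www.examplepublisher.com/journal/abc", is_open_access=False))

# OA title: no proxy prefix, so no login barrier.
print(access_url("https://www.exampleoajournal.org/article/123", is_open_access=True))
```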
I think generally the biggest challenge with discoverability of OA content is the accuracy of metadata and which party is accountable/responsible for maintaining it. With OA content from the major publishers this isn't such a problem – you can contact the publisher to report problems with their metadata for OA titles in exactly the same way as you can for their paywalled content. However, when it comes to the colossal swathes of freely available titles from smaller content providers, this is a big problem. The Community Zone collection and CDI collection of records for DOAJ has always suffered from real problems regarding the accuracy of coverage data in the records. It's really difficult for any discovery provider to gather and maintain accurate, reliable and up-to-date data for OA content.

We use the Open Access via Unpaywall integration in our Primo records, which helps connect users with OA content. We do periodically get enquiries where it's not working properly – linking users to the wrong content or to a piece of content that is actually paywalled – but we get very few enquiries from users reporting problems with this. An impressively low number, in my opinion – my impression is that it works pretty effectively. I've also been very impressed with Unpaywall's customer support and willingness to make data corrections in their knowledgebase. I've found them to be very responsive and keen to help whenever I've reported something.

Something that I think can cause a bit of confusion is the Open Access flag in CDI records (i.e. the label saying OPEN ACCESS with an orange unlocked padlock icon). That's not a functional tag – it's not hyperlinked or anything. It displays when a content provider has flagged a record as OA in the discovery file that they send to Ex Libris. The presence of that flag doesn't guarantee there is going to be any link to the OA content in the record. I'd say in most cases where the flag appears there is also a link in the record to the OA item, or the Unpaywall integration has pulled in an OA version – and because that does happen so frequently, I think library staff can start to assume that the OA indicator flag is linked in some way to those options that link out to the full text, when that is not actually the case. And of course, the OA flag could always be appearing because a content provider has erroneously marked the item as OA in the discovery file they have sent to Ex Libris. I try to explain to staff that the OA flag is a separate piece of functionality from anything in the Primo record that actually connects users to the full text… but the flag is a good indicator that there is an OA version out there, so if there isn't a link to the OA content in the record itself then it's worthwhile having a look on the web or in other OA knowledgebases to see if you can track down an OA version. If we thought that the OA flag was appearing against a CDI record for a piece of content that actually wasn't OA then we would report that to Ex Libris as an error.
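For anyone curious about the Unpaywall integration mentioned above, the same data Unpaywall holds is also exposed through its public REST API, which can be queried per DOI. Here is a minimal, standalone sketch (the DOI and contact email are placeholders, and this is separate from how the Ex Libris integration itself is configured):

```python
import requests  # third-party library: pip install requests


def check_unpaywall(doi: str, email: str) -> None:
    """Look up a DOI in the Unpaywall REST API and report the best OA location, if any."""
    response = requests.get(
        f"https://api.unpaywall.org/v2/{doi}",
        params={"email": email},  # Unpaywall asks callers to identify themselves by email
        timeout=30,
    )
    response.raise_for_status()
    record = response.json()

    location = record.get("best_oa_location")
    if record.get("is_oa") and location:
        print("OA version found:")
        print("  host type:", location.get("host_type"))  # e.g. 'publisher' or 'repository'
        print("  version:  ", location.get("version"))    # e.g. 'publishedVersion', 'acceptedVersion'
        print("  link:     ", location.get("url_for_pdf") or location.get("url"))
    else:
        print("No open access location recorded for this DOI.")


# Placeholder values - substitute a real DOI and your own contact address.
check_unpaywall("10.1000/example.doi", "someone@example.ac.uk")
```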
Question: Have you faced problems activating trials in Alma? I find that they either work perfectly, or require lots of tweaking. How do you approach trials in general at Strathclyde?

Response from Fiona Tinto:
We don’t currently use Alma for trials at Strathclyde – although this is something I’m looking at again, as it’s been a while since I last looked at the module.
At Strathclyde our general procedure is that we run trials only for eResources that are being genuinely considered for potential purchase. I am not happy to run trials for individual academics, students or departments who are looking for a means of gaining free short-term access to a resource that they require for a particular piece of research. I feel that it's disingenuous for us as a library to be requesting free trials from a publisher for this reason, and I also feel that it wastes the library's opportunity of genuinely trialling a resource – for example, if we trialled a product for a reason like that, and subsequently another department wanted to trial the resource to investigate it as a potential new acquisition, then we would have used up our opportunity of trialling that resource at that point in time. In those circumstances I wouldn't feel that I could appropriately request another trial of the resource for at least another year (potentially longer). It's also a lot of work to run product trials.

We may request trial access for individual academic and library staff members using an individual account or referral URL – however, we see this as a preliminary step allowing relevant academics and/or Faculty Librarians to view the content on the resource, with a view to then running a full institutional trial of the product if they are still interested once they've had an initial look at the content. If so, we'll run a full institutional trial and set up the resource as fully as possible – in exactly the same way that we would set up the resource once purchased. I think that's the only way for us to determine whether the product is suitable for our needs. For example, trial access for an individual staff member using an individual account tells us literally nothing about how the product functions using institutional authentication methods. If we're requesting a trial for individual users, I'd make sure to confirm with the publisher from the outset that they'd be happy to issue us with an institution-wide trial later on if we're still interested – I wouldn't want any confusion there, with the publisher setting up a trial for individuals and that being treated as our only opportunity to trial the product.

The need to trial products can present problems in terms of having sufficient time and resources to run a trial vs the need to get resources ordered in time before, for example, end-of-financial-year purchasing deadlines. We have purchased new products without trialling them beforehand, and often everything has been fine, but there have been enough occasions where we have hit problems that it makes me really unhappy with this approach. For products over a certain price threshold, or that are on a different eResource platform to any of our existing content, I feel quite strongly that running a full institutional trial is essential before placing an order. For products where we have other similar resources from the same provider on the same eResource platform, I think we can be slightly more relaxed about it – e.g. say we had a History, a Literature and a Political Science subject collection of eBooks from a particular provider and then we wanted to buy their Education eBook subject collection that is on the same eResource platform – we wouldn't feel it necessary to run a trial of the Education Collection, as I would be confident that everything would function in the same way as it does for their other eBook collections. Whereas if we had, for example, existing eJournal content from a publisher and then were interested in purchasing that publisher's Technical Standards content, which lives on a completely different website, then I would want to run a full institutional trial before purchasing. My experience is that where a publisher has content on different platforms, the functionality on one platform is not a guarantee of the functionality on another of their platforms.

Whenever we trial a product we attempt to evaluate the resource (gathering feedback from library staff members across all relevant departments, as well as user feedback) and provide feedback to the publisher. Due to the amount of work this involves and the time it takes, this doesn't always happen – but that's what we're aiming for. Our current feedback form uses Microsoft Forms and is a work in progress. We're attempting to get something standard in place so that the same questions can be rolled out for trials of different products, with only minimal adjustment to tailor the questions as appropriate for specific resources. That form isn't fully live yet: we still need to work out things like what our data retention policy will be, how we'll store the responses, and what level of information we give and what consent we request from users regarding how the data will be used (e.g. is the user happy for us to share their anonymised responses and comments with publishers for the purposes of providing product feedback to support development of the product; is the user happy for the publisher to use those anonymised comments in marketing and publicity material; etc.). We need to work out those details before we can really roll the form out to users. So at the moment, the feedback we give to publishers tends to be in-depth feedback from our library staff members, but more general positive or negative comments from our users.
Social Sharing
• Website: www.uksg.org
• Twitter: @UKSG – Please use the hashtag #UKSGseminar to tweet about our events
• Facebook: UKSG (Organisation)
• LinkedIn: UKSG Group
• SlideShare: UKSG
Should you have any issues with the files above prior to the event, please contact events@uksg.org, or contact the organiser (Vicky Drew) via the Q&A during the event.
